00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3694 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3295 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.058 using credential 00000000-0000-0000-0000-000000000002 00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.081 Fetching changes from the remote Git repository 00:00:00.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.181 Using shallow fetch with depth 1 00:00:00.181 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.181 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.267 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.267 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.471 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.482 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.495 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD) 00:00:04.495 > git config core.sparsecheckout # timeout=10 00:00:04.505 > git read-tree -mu HEAD # timeout=10 00:00:04.520 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5 00:00:04.545 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs" 00:00:04.545 > git rev-list --no-walk 1410c9c474f7ce6874b6ec6ac44d331a6633148e # timeout=10 00:00:04.638 [Pipeline] Start of Pipeline 00:00:04.652 [Pipeline] library 00:00:04.654 Loading library shm_lib@master 00:00:04.654 Library shm_lib@master is cached. Copying from home. 00:00:04.670 [Pipeline] node 00:00:04.678 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.680 [Pipeline] { 00:00:04.688 [Pipeline] catchError 00:00:04.689 [Pipeline] { 00:00:04.701 [Pipeline] wrap 00:00:04.710 [Pipeline] { 00:00:04.719 [Pipeline] stage 00:00:04.721 [Pipeline] { (Prologue) 00:00:04.891 [Pipeline] sh 00:00:05.185 + logger -p user.info -t JENKINS-CI 00:00:05.199 [Pipeline] echo 00:00:05.200 Node: WFP22 00:00:05.208 [Pipeline] sh 00:00:05.506 [Pipeline] setCustomBuildProperty 00:00:05.515 [Pipeline] echo 00:00:05.516 Cleanup processes 00:00:05.519 [Pipeline] sh 00:00:05.798 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.799 4141533 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.812 [Pipeline] sh 00:00:06.109 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.109 ++ grep -v 'sudo pgrep' 00:00:06.110 ++ awk '{print $1}' 00:00:06.110 + sudo kill -9 00:00:06.110 + true 00:00:06.126 [Pipeline] cleanWs 00:00:06.135 [WS-CLEANUP] Deleting project workspace... 00:00:06.135 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.142 [WS-CLEANUP] done 00:00:06.147 [Pipeline] setCustomBuildProperty 00:00:06.164 [Pipeline] sh 00:00:06.479 + sudo git config --global --replace-all safe.directory '*' 00:00:06.563 [Pipeline] httpRequest 00:00:06.615 [Pipeline] echo 00:00:06.619 Sorcerer 10.211.164.101 is alive 00:00:06.626 [Pipeline] httpRequest 00:00:06.630 HttpMethod: GET 00:00:06.630 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:06.630 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:06.642 Response Code: HTTP/1.1 200 OK 00:00:06.642 Success: Status code 200 is in the accepted range: 200,404 00:00:06.642 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:09.497 [Pipeline] sh 00:00:09.779 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:09.796 [Pipeline] httpRequest 00:00:09.814 [Pipeline] echo 00:00:09.816 Sorcerer 10.211.164.101 is alive 00:00:09.825 [Pipeline] httpRequest 00:00:09.829 HttpMethod: GET 00:00:09.830 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:09.830 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:09.843 Response Code: HTTP/1.1 200 OK 00:00:09.843 Success: Status code 200 is in the accepted range: 200,404 00:00:09.843 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:38.843 [Pipeline] sh 00:00:39.126 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:41.672 [Pipeline] sh 00:00:41.953 + git -C spdk log --oneline -n5 00:00:41.953 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:41.953 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:41.953 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:41.953 d005e023b raid: fix empty slot not updated in sb after resize 00:00:41.953 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:41.969 [Pipeline] withCredentials 00:00:41.979 > git --version # timeout=10 00:00:41.990 > git --version # 'git version 2.39.2' 00:00:42.006 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:42.007 [Pipeline] { 00:00:42.016 [Pipeline] retry 00:00:42.017 [Pipeline] { 00:00:42.033 [Pipeline] sh 00:00:42.316 + git ls-remote http://dpdk.org/git/dpdk main 00:00:42.327 [Pipeline] } 00:00:42.348 [Pipeline] // retry 00:00:42.352 [Pipeline] } 00:00:42.366 [Pipeline] // withCredentials 00:00:42.373 [Pipeline] httpRequest 00:00:42.384 [Pipeline] echo 00:00:42.385 Sorcerer 10.211.164.101 is alive 00:00:42.392 [Pipeline] httpRequest 00:00:42.396 HttpMethod: GET 00:00:42.396 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:00:42.397 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:00:42.399 Response Code: HTTP/1.1 200 OK 00:00:42.400 Success: Status code 200 is in the accepted range: 200,404 00:00:42.400 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:00:44.408 [Pipeline] sh 00:00:44.732 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:00:46.123 [Pipeline] sh 00:00:46.406 + git -C dpdk log --oneline -n5 00:00:46.406 82c47f005b version: 24.07-rc3 00:00:46.406 d9d1be537e doc: remove reference to mbuf pkt field 00:00:46.406 52c7393a03 doc: set required MinGW version in Windows guide 00:00:46.406 92439dc9ac dts: improve starting and stopping interactive shells 00:00:46.406 2b648cd4e4 dts: add context manager for interactive shells 00:00:46.416 [Pipeline] } 00:00:46.435 [Pipeline] // stage 00:00:46.444 [Pipeline] stage 00:00:46.445 [Pipeline] { (Prepare) 00:00:46.467 [Pipeline] writeFile 00:00:46.484 [Pipeline] sh 00:00:46.767 + logger -p user.info -t JENKINS-CI 00:00:46.779 [Pipeline] sh 00:00:47.062 + logger -p user.info -t JENKINS-CI 00:00:47.074 [Pipeline] sh 00:00:47.356 + cat autorun-spdk.conf 00:00:47.357 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.357 SPDK_TEST_NVMF=1 00:00:47.357 SPDK_TEST_NVME_CLI=1 00:00:47.357 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.357 SPDK_TEST_NVMF_NICS=e810 00:00:47.357 SPDK_TEST_VFIOUSER=1 00:00:47.357 SPDK_RUN_UBSAN=1 00:00:47.357 NET_TYPE=phy 00:00:47.357 SPDK_TEST_NATIVE_DPDK=main 00:00:47.357 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:47.364 RUN_NIGHTLY=1 00:00:47.368 [Pipeline] readFile 00:00:47.393 [Pipeline] withEnv 00:00:47.395 [Pipeline] { 00:00:47.409 [Pipeline] sh 00:00:47.694 + set -ex 00:00:47.694 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:47.694 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:47.694 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.694 ++ SPDK_TEST_NVMF=1 00:00:47.694 ++ SPDK_TEST_NVME_CLI=1 00:00:47.694 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.694 ++ SPDK_TEST_NVMF_NICS=e810 00:00:47.694 ++ SPDK_TEST_VFIOUSER=1 00:00:47.694 ++ SPDK_RUN_UBSAN=1 00:00:47.694 ++ NET_TYPE=phy 00:00:47.694 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:47.694 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:00:47.694 ++ RUN_NIGHTLY=1 00:00:47.694 + case $SPDK_TEST_NVMF_NICS in 00:00:47.694 + DRIVERS=ice 00:00:47.694 + [[ tcp == \r\d\m\a ]] 00:00:47.694 + [[ -n ice ]] 00:00:47.694 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:47.694 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:47.694 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:47.694 rmmod: ERROR: Module irdma is not currently loaded 00:00:47.694 rmmod: ERROR: Module i40iw is not currently loaded 00:00:47.694 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:47.694 + true 00:00:47.694 + for D in $DRIVERS 00:00:47.694 + sudo modprobe ice 00:00:47.694 + exit 0 00:00:47.703 [Pipeline] } 00:00:47.718 [Pipeline] // withEnv 00:00:47.723 [Pipeline] } 00:00:47.734 [Pipeline] // stage 00:00:47.742 [Pipeline] catchError 00:00:47.744 [Pipeline] { 00:00:47.758 [Pipeline] timeout 00:00:47.758 Timeout set to expire in 50 min 00:00:47.760 [Pipeline] { 00:00:47.774 [Pipeline] stage 00:00:47.776 [Pipeline] { (Tests) 00:00:47.791 [Pipeline] sh 00:00:48.075 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.075 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.075 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.075 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:48.075 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.075 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:48.075 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:48.075 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:48.075 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:48.075 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:48.075 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:48.075 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:48.075 + source /etc/os-release 00:00:48.075 ++ NAME='Fedora Linux' 00:00:48.075 ++ VERSION='38 (Cloud Edition)' 00:00:48.075 ++ ID=fedora 00:00:48.075 ++ VERSION_ID=38 00:00:48.075 ++ VERSION_CODENAME= 00:00:48.075 ++ PLATFORM_ID=platform:f38 00:00:48.075 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:48.075 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:48.075 ++ LOGO=fedora-logo-icon 00:00:48.075 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:48.075 ++ HOME_URL=https://fedoraproject.org/ 00:00:48.075 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:48.075 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:48.075 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:48.075 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:48.075 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:48.075 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:48.075 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:48.075 ++ SUPPORT_END=2024-05-14 00:00:48.075 ++ VARIANT='Cloud Edition' 00:00:48.075 ++ VARIANT_ID=cloud 00:00:48.075 + uname -a 00:00:48.075 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:48.075 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:50.612 Hugepages 00:00:50.612 node hugesize free / total 00:00:50.612 node0 1048576kB 0 / 0 00:00:50.612 node0 2048kB 0 / 0 00:00:50.612 node1 1048576kB 0 / 0 00:00:50.612 node1 2048kB 0 / 0 00:00:50.612 00:00:50.612 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:50.612 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:00:50.871 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:50.871 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:50.871 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:50.871 + rm -f /tmp/spdk-ld-path 00:00:50.871 + source autorun-spdk.conf 00:00:50.871 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.871 ++ SPDK_TEST_NVMF=1 00:00:50.871 ++ SPDK_TEST_NVME_CLI=1 00:00:50.871 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.871 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.871 ++ SPDK_TEST_VFIOUSER=1 00:00:50.871 ++ SPDK_RUN_UBSAN=1 00:00:50.871 ++ NET_TYPE=phy 00:00:50.871 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:50.871 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.871 ++ RUN_NIGHTLY=1 00:00:50.871 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:50.871 + [[ -n '' ]] 00:00:50.871 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.871 + for M in /var/spdk/build-*-manifest.txt 00:00:50.871 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:50.871 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.871 + for M in /var/spdk/build-*-manifest.txt 00:00:50.871 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:50.871 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.871 ++ uname 00:00:50.871 + [[ Linux == \L\i\n\u\x ]] 00:00:50.871 + sudo dmesg -T 00:00:50.871 + sudo dmesg --clear 00:00:51.130 + dmesg_pid=4142467 00:00:51.130 + [[ Fedora Linux == FreeBSD ]] 00:00:51.130 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:51.130 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:51.130 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:51.130 + [[ -x /usr/src/fio-static/fio ]] 00:00:51.130 + sudo dmesg -Tw 00:00:51.130 + export FIO_BIN=/usr/src/fio-static/fio 00:00:51.130 + FIO_BIN=/usr/src/fio-static/fio 00:00:51.130 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:51.130 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:51.130 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:51.130 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:51.130 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:51.130 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:51.130 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:51.130 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:51.130 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:51.130 Test configuration: 00:00:51.130 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.130 SPDK_TEST_NVMF=1 00:00:51.130 SPDK_TEST_NVME_CLI=1 00:00:51.130 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.130 SPDK_TEST_NVMF_NICS=e810 00:00:51.130 SPDK_TEST_VFIOUSER=1 00:00:51.130 SPDK_RUN_UBSAN=1 00:00:51.130 NET_TYPE=phy 00:00:51.130 SPDK_TEST_NATIVE_DPDK=main 00:00:51.130 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:51.130 RUN_NIGHTLY=1 13:28:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:51.130 13:28:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:51.130 13:28:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:51.130 13:28:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:51.130 13:28:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.130 13:28:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.130 13:28:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.130 13:28:47 -- paths/export.sh@5 -- $ export PATH 00:00:51.130 13:28:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.130 13:28:47 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:51.130 13:28:47 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:51.130 13:28:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721906927.XXXXXX 00:00:51.130 13:28:47 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721906927.g9HAoq 00:00:51.130 13:28:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:51.130 13:28:47 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:00:51.130 13:28:47 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:51.130 13:28:47 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:51.130 13:28:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:51.130 13:28:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:51.130 13:28:47 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:51.130 13:28:47 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:00:51.130 13:28:47 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.130 13:28:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:51.130 13:28:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:51.130 13:28:47 -- pm/common@17 -- $ local monitor 00:00:51.130 13:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.130 13:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.130 13:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.130 13:28:47 -- pm/common@21 -- $ date +%s 00:00:51.130 13:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.130 13:28:47 -- pm/common@21 -- $ date +%s 00:00:51.130 13:28:47 -- pm/common@25 -- $ sleep 1 00:00:51.130 13:28:47 -- pm/common@21 -- $ date +%s 00:00:51.130 13:28:47 -- pm/common@21 -- $ date +%s 00:00:51.131 13:28:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906927 00:00:51.131 13:28:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906927 00:00:51.131 13:28:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906927 00:00:51.131 13:28:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906927 00:00:51.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906927_collect-cpu-load.pm.log 00:00:51.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906927_collect-vmstat.pm.log 00:00:51.131 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906927_collect-cpu-temp.pm.log 00:00:51.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906927_collect-bmc-pm.bmc.pm.log 00:00:52.068 13:28:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:52.068 13:28:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:52.068 13:28:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:52.068 13:28:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.068 13:28:48 -- spdk/autobuild.sh@16 -- $ date -u 00:00:52.068 Thu Jul 25 11:28:48 AM UTC 2024 00:00:52.068 13:28:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:52.068 v24.09-pre-321-g704257090 00:00:52.068 13:28:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:52.068 13:28:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:52.068 13:28:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:52.068 13:28:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:52.068 13:28:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:52.068 13:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.328 ************************************ 00:00:52.328 START TEST ubsan 00:00:52.328 ************************************ 00:00:52.328 13:28:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:52.328 using ubsan 00:00:52.328 00:00:52.328 real 0m0.001s 00:00:52.328 user 0m0.000s 00:00:52.328 sys 0m0.000s 00:00:52.328 13:28:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:52.328 13:28:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:52.328 ************************************ 00:00:52.328 END TEST ubsan 00:00:52.328 ************************************ 00:00:52.328 13:28:49 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:00:52.328 13:28:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:52.328 13:28:49 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:52.328 13:28:49 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:00:52.328 13:28:49 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:52.328 13:28:49 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.328 ************************************ 00:00:52.328 START TEST build_native_dpdk 00:00:52.328 ************************************ 00:00:52.328 13:28:49 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:52.328 13:28:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:52.328 
13:28:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:52.329 82c47f005b version: 24.07-rc3 00:00:52.329 d9d1be537e doc: remove reference to mbuf pkt field 00:00:52.329 52c7393a03 doc: set required MinGW version in Windows guide 00:00:52.329 92439dc9ac dts: improve starting and stopping interactive shells 00:00:52.329 2b648cd4e4 dts: add context manager for interactive shells 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0 
00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:52.329 patching file config/rte_config.h 00:00:52.329 Hunk #1 succeeded at 70 (offset 11 lines). 
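The cmp_versions/decimal xtrace above amounts to a field-by-field numeric comparison: both version strings are split on '.', '-' and ':', each field is normalized to a decimal integer (non-numeric fields such as "rc3" become 0, "07" becomes 7), and the fields are compared left to right. A minimal standalone Bash sketch of that logic, using a hypothetical ver_lt helper rather than the actual scripts/common.sh code:

ver_lt() {   # returns 0 if $1 < $2, 1 otherwise (field-by-field, like cmp_versions '<')
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"   # split on the same separators as the trace
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0   # "rc3" and missing fields compare as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( 10#$a > 10#$b )) && return 1   # 10# forces decimal so "07" reads as 7
        (( 10#$a < 10#$b )) && return 0
    done
    return 1   # all fields equal: not less-than
}
ver_lt 24.07.0-rc3 21.11.0; echo $?   # 1, matching the trace's "return 1"

Because non-numeric fields compare as 0, the second trace below treats 24.07.0-rc3 and 24.07.0 as equal, and lt likewise returns 1.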
00:00:52.329 13:28:49 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:00:52.329 13:28:49 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:00:52.330 13:28:49 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:00:52.330 13:28:49 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:00:52.330 13:28:49 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:00:52.330 13:28:49 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:00:52.330 13:28:49 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:52.330 13:28:49 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:57.604 The Meson build system 00:00:57.604 Version: 1.3.1 00:00:57.604 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:57.604 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:57.604 Build type: native build 00:00:57.604 Program cat found: YES (/usr/bin/cat) 00:00:57.604 Project name: DPDK 00:00:57.604 Project version: 24.07.0-rc3 00:00:57.604 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:57.604 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:57.604 Host machine cpu family: x86_64 00:00:57.604 Host machine cpu: x86_64 00:00:57.604 Message: ## Building in Developer Mode ## 00:00:57.604 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:57.604 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:57.604 Program options-ibverbs-static.sh found: YES 
(/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:57.604 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:00:57.604 Program cat found: YES (/usr/bin/cat) 00:00:57.604 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:00:57.604 Compiler for C supports arguments -march=native: YES 00:00:57.604 Checking for size of "void *" : 8 00:00:57.604 Checking for size of "void *" : 8 (cached) 00:00:57.604 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:00:57.604 Library m found: YES 00:00:57.604 Library numa found: YES 00:00:57.604 Has header "numaif.h" : YES 00:00:57.604 Library fdt found: NO 00:00:57.604 Library execinfo found: NO 00:00:57.604 Has header "execinfo.h" : YES 00:00:57.604 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:57.604 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:57.604 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:57.604 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:57.604 Run-time dependency openssl found: YES 3.0.9 00:00:57.604 Run-time dependency libpcap found: YES 1.10.4 00:00:57.604 Has header "pcap.h" with dependency libpcap: YES 00:00:57.604 Compiler for C supports arguments -Wcast-qual: YES 00:00:57.604 Compiler for C supports arguments -Wdeprecated: YES 00:00:57.604 Compiler for C supports arguments -Wformat: YES 00:00:57.604 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:57.604 Compiler for C supports arguments -Wformat-security: NO 00:00:57.604 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:57.604 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:57.604 Compiler for C supports arguments -Wnested-externs: YES 00:00:57.604 Compiler for C supports arguments -Wold-style-definition: YES 00:00:57.604 Compiler for C supports arguments -Wpointer-arith: YES 00:00:57.604 Compiler for C supports arguments -Wsign-compare: YES 00:00:57.604 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:57.604 Compiler for C supports arguments -Wundef: YES 00:00:57.604 Compiler for C supports arguments -Wwrite-strings: YES 00:00:57.604 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:57.604 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:57.604 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:57.604 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:57.604 Program objdump found: YES (/usr/bin/objdump) 00:00:57.604 Compiler for C supports arguments -mavx512f: YES 00:00:57.604 Checking if "AVX512 checking" compiles: YES 00:00:57.604 Fetching value of define "__SSE4_2__" : 1 00:00:57.604 Fetching value of define "__AES__" : 1 00:00:57.604 Fetching value of define "__AVX__" : 1 00:00:57.604 Fetching value of define "__AVX2__" : 1 00:00:57.604 Fetching value of define "__AVX512BW__" : 1 00:00:57.604 Fetching value of define "__AVX512CD__" : 1 00:00:57.604 Fetching value of define "__AVX512DQ__" : 1 00:00:57.604 Fetching value of define "__AVX512F__" : 1 00:00:57.604 Fetching value of define "__AVX512VL__" : 1 00:00:57.604 Fetching value of define "__PCLMUL__" : 1 00:00:57.604 Fetching value of define "__RDRND__" : 1 00:00:57.604 Fetching value of define "__RDSEED__" : 1 00:00:57.604 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:00:57.604 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:57.604 
Message: lib/log: Defining dependency "log" 00:00:57.604 Message: lib/kvargs: Defining dependency "kvargs" 00:00:57.604 Message: lib/argparse: Defining dependency "argparse" 00:00:57.604 Message: lib/telemetry: Defining dependency "telemetry" 00:00:57.604 Checking for function "getentropy" : NO 00:00:57.604 Message: lib/eal: Defining dependency "eal" 00:00:57.604 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:00:57.604 Message: lib/ring: Defining dependency "ring" 00:00:57.604 Message: lib/rcu: Defining dependency "rcu" 00:00:57.604 Message: lib/mempool: Defining dependency "mempool" 00:00:57.604 Message: lib/mbuf: Defining dependency "mbuf" 00:00:57.604 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:57.604 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:00:57.604 Compiler for C supports arguments -mpclmul: YES 00:00:57.604 Compiler for C supports arguments -maes: YES 00:00:57.604 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:57.604 Compiler for C supports arguments -mavx512bw: YES 00:00:57.604 Compiler for C supports arguments -mavx512dq: YES 00:00:57.604 Compiler for C supports arguments -mavx512vl: YES 00:00:57.604 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:57.604 Compiler for C supports arguments -mavx2: YES 00:00:57.604 Compiler for C supports arguments -mavx: YES 00:00:57.604 Message: lib/net: Defining dependency "net" 00:00:57.604 Message: lib/meter: Defining dependency "meter" 00:00:57.604 Message: lib/ethdev: Defining dependency "ethdev" 00:00:57.604 Message: lib/pci: Defining dependency "pci" 00:00:57.604 Message: lib/cmdline: Defining dependency "cmdline" 00:00:57.604 Message: lib/metrics: Defining dependency "metrics" 00:00:57.604 Message: lib/hash: Defining dependency "hash" 00:00:57.604 Message: lib/timer: Defining dependency "timer" 00:00:57.604 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512CD__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:57.604 Message: lib/acl: Defining dependency "acl" 00:00:57.604 Message: lib/bbdev: Defining dependency "bbdev" 00:00:57.604 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:57.604 Run-time dependency libelf found: YES 0.190 00:00:57.604 Message: lib/bpf: Defining dependency "bpf" 00:00:57.604 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:57.604 Message: lib/compressdev: Defining dependency "compressdev" 00:00:57.604 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:57.604 Message: lib/distributor: Defining dependency "distributor" 00:00:57.604 Message: lib/dmadev: Defining dependency "dmadev" 00:00:57.604 Message: lib/efd: Defining dependency "efd" 00:00:57.604 Message: lib/eventdev: Defining dependency "eventdev" 00:00:57.604 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:57.604 Message: lib/gpudev: Defining dependency "gpudev" 00:00:57.604 Message: lib/gro: Defining dependency "gro" 00:00:57.604 Message: lib/gso: Defining dependency "gso" 00:00:57.604 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:57.604 Message: lib/jobstats: Defining dependency "jobstats" 00:00:57.604 
Message: lib/latencystats: Defining dependency "latencystats" 00:00:57.604 Message: lib/lpm: Defining dependency "lpm" 00:00:57.604 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:57.604 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:57.604 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:00:57.604 Message: lib/member: Defining dependency "member" 00:00:57.604 Message: lib/pcapng: Defining dependency "pcapng" 00:00:57.604 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:57.605 Message: lib/power: Defining dependency "power" 00:00:57.605 Message: lib/rawdev: Defining dependency "rawdev" 00:00:57.605 Message: lib/regexdev: Defining dependency "regexdev" 00:00:57.605 Message: lib/mldev: Defining dependency "mldev" 00:00:57.605 Message: lib/rib: Defining dependency "rib" 00:00:57.605 Message: lib/reorder: Defining dependency "reorder" 00:00:57.605 Message: lib/sched: Defining dependency "sched" 00:00:57.605 Message: lib/security: Defining dependency "security" 00:00:57.605 Message: lib/stack: Defining dependency "stack" 00:00:57.605 Has header "linux/userfaultfd.h" : YES 00:00:57.605 Has header "linux/vduse.h" : YES 00:00:57.605 Message: lib/vhost: Defining dependency "vhost" 00:00:57.605 Message: lib/ipsec: Defining dependency "ipsec" 00:00:57.605 Message: lib/pdcp: Defining dependency "pdcp" 00:00:57.605 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:57.605 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:57.605 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:57.605 Message: lib/fib: Defining dependency "fib" 00:00:57.605 Message: lib/port: Defining dependency "port" 00:00:57.605 Message: lib/pdump: Defining dependency "pdump" 00:00:57.605 Message: lib/table: Defining dependency "table" 00:00:57.605 Message: lib/pipeline: Defining dependency "pipeline" 00:00:57.605 Message: lib/graph: Defining dependency "graph" 00:00:57.605 Message: lib/node: Defining dependency "node" 00:00:57.605 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:57.864 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:57.864 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:57.864 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:57.864 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:57.864 Compiler for C supports arguments -Wno-unused-value: YES 00:00:57.864 Compiler for C supports arguments -Wno-format: YES 00:00:57.864 Compiler for C supports arguments -Wno-format-security: YES 00:00:57.864 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:57.864 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:57.864 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:57.864 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:57.864 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:57.864 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:57.864 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:57.864 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:57.864 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:57.864 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:57.864 Has header "sys/epoll.h" : YES 00:00:57.864 Program doxygen found: YES (/usr/bin/doxygen) 00:00:57.864 Configuring doxy-api-html.conf using configuration 00:00:57.864 
Configuring doxy-api-man.conf using configuration 00:00:57.864 Program mandb found: YES (/usr/bin/mandb) 00:00:57.864 Program sphinx-build found: NO 00:00:57.865 Configuring rte_build_config.h using configuration 00:00:57.865 Message: 00:00:57.865 ================= 00:00:57.865 Applications Enabled 00:00:57.865 ================= 00:00:57.865 00:00:57.865 apps: 00:00:57.865 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:00:57.865 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:00:57.865 test-pmd, test-regex, test-sad, test-security-perf, 00:00:57.865 00:00:57.865 Message: 00:00:57.865 ================= 00:00:57.865 Libraries Enabled 00:00:57.865 ================= 00:00:57.865 00:00:57.865 libs: 00:00:57.865 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:00:57.865 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:00:57.865 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:00:57.865 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:00:57.865 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:00:57.865 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:00:57.865 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:00:57.865 graph, node, 00:00:57.865 00:00:57.865 Message: 00:00:57.865 =============== 00:00:57.865 Drivers Enabled 00:00:57.865 =============== 00:00:57.865 00:00:57.865 common: 00:00:57.865 00:00:57.865 bus: 00:00:57.865 pci, vdev, 00:00:57.865 mempool: 00:00:57.865 ring, 00:00:57.865 dma: 00:00:57.865 00:00:57.865 net: 00:00:57.865 i40e, 00:00:57.865 raw: 00:00:57.865 00:00:57.865 crypto: 00:00:57.865 00:00:57.865 compress: 00:00:57.865 00:00:57.865 regex: 00:00:57.865 00:00:57.865 ml: 00:00:57.865 00:00:57.865 vdpa: 00:00:57.865 00:00:57.865 event: 00:00:57.865 00:00:57.865 baseband: 00:00:57.865 00:00:57.865 gpu: 00:00:57.865 00:00:57.865 00:00:57.865 Message: 00:00:57.865 ================= 00:00:57.865 Content Skipped 00:00:57.865 ================= 00:00:57.865 00:00:57.865 apps: 00:00:57.865 00:00:57.865 libs: 00:00:57.865 00:00:57.865 drivers: 00:00:57.865 common/cpt: not in enabled drivers build config 00:00:57.865 common/dpaax: not in enabled drivers build config 00:00:57.865 common/iavf: not in enabled drivers build config 00:00:57.865 common/idpf: not in enabled drivers build config 00:00:57.865 common/ionic: not in enabled drivers build config 00:00:57.865 common/mvep: not in enabled drivers build config 00:00:57.865 common/octeontx: not in enabled drivers build config 00:00:57.865 bus/auxiliary: not in enabled drivers build config 00:00:57.865 bus/cdx: not in enabled drivers build config 00:00:57.865 bus/dpaa: not in enabled drivers build config 00:00:57.865 bus/fslmc: not in enabled drivers build config 00:00:57.865 bus/ifpga: not in enabled drivers build config 00:00:57.865 bus/platform: not in enabled drivers build config 00:00:57.865 bus/uacce: not in enabled drivers build config 00:00:57.865 bus/vmbus: not in enabled drivers build config 00:00:57.865 common/cnxk: not in enabled drivers build config 00:00:57.865 common/mlx5: not in enabled drivers build config 00:00:57.865 common/nfp: not in enabled drivers build config 00:00:57.865 common/nitrox: not in enabled drivers build config 00:00:57.865 common/qat: not in enabled drivers build config 00:00:57.865 common/sfc_efx: not in enabled drivers build config 00:00:57.865 mempool/bucket: not in 
enabled drivers build config
00:00:57.865 mempool/cnxk: not in enabled drivers build config
00:00:57.865 mempool/dpaa: not in enabled drivers build config
00:00:57.865 mempool/dpaa2: not in enabled drivers build config
00:00:57.865 mempool/octeontx: not in enabled drivers build config
00:00:57.865 mempool/stack: not in enabled drivers build config
00:00:57.865 dma/cnxk: not in enabled drivers build config
00:00:57.865 dma/dpaa: not in enabled drivers build config
00:00:57.865 dma/dpaa2: not in enabled drivers build config
00:00:57.865 dma/hisilicon: not in enabled drivers build config
00:00:57.865 dma/idxd: not in enabled drivers build config
00:00:57.865 dma/ioat: not in enabled drivers build config
00:00:57.865 dma/odm: not in enabled drivers build config
00:00:57.865 dma/skeleton: not in enabled drivers build config
00:00:57.865 net/af_packet: not in enabled drivers build config
00:00:57.865 net/af_xdp: not in enabled drivers build config
00:00:57.865 net/ark: not in enabled drivers build config
00:00:57.865 net/atlantic: not in enabled drivers build config
00:00:57.865 net/avp: not in enabled drivers build config
00:00:57.865 net/axgbe: not in enabled drivers build config
00:00:57.865 net/bnx2x: not in enabled drivers build config
00:00:57.865 net/bnxt: not in enabled drivers build config
00:00:57.865 net/bonding: not in enabled drivers build config
00:00:57.865 net/cnxk: not in enabled drivers build config
00:00:57.865 net/cpfl: not in enabled drivers build config
00:00:57.865 net/cxgbe: not in enabled drivers build config
00:00:57.865 net/dpaa: not in enabled drivers build config
00:00:57.865 net/dpaa2: not in enabled drivers build config
00:00:57.865 net/e1000: not in enabled drivers build config
00:00:57.865 net/ena: not in enabled drivers build config
00:00:57.865 net/enetc: not in enabled drivers build config
00:00:57.865 net/enetfec: not in enabled drivers build config
00:00:57.865 net/enic: not in enabled drivers build config
00:00:57.865 net/failsafe: not in enabled drivers build config
00:00:57.865 net/fm10k: not in enabled drivers build config
00:00:57.865 net/gve: not in enabled drivers build config
00:00:57.865 net/hinic: not in enabled drivers build config
00:00:57.865 net/hns3: not in enabled drivers build config
00:00:57.865 net/iavf: not in enabled drivers build config
00:00:57.865 net/ice: not in enabled drivers build config
00:00:57.865 net/idpf: not in enabled drivers build config
00:00:57.865 net/igc: not in enabled drivers build config
00:00:57.865 net/ionic: not in enabled drivers build config
00:00:57.865 net/ipn3ke: not in enabled drivers build config
00:00:57.865 net/ixgbe: not in enabled drivers build config
00:00:57.865 net/mana: not in enabled drivers build config
00:00:57.865 net/memif: not in enabled drivers build config
00:00:57.865 net/mlx4: not in enabled drivers build config
00:00:57.865 net/mlx5: not in enabled drivers build config
00:00:57.865 net/mvneta: not in enabled drivers build config
00:00:57.865 net/mvpp2: not in enabled drivers build config
00:00:57.865 net/netvsc: not in enabled drivers build config
00:00:57.865 net/nfb: not in enabled drivers build config
00:00:57.865 net/nfp: not in enabled drivers build config
00:00:57.865 net/ngbe: not in enabled drivers build config
00:00:57.865 net/ntnic: not in enabled drivers build config
00:00:57.865 net/null: not in enabled drivers build config
00:00:57.865 net/octeontx: not in enabled drivers build config
00:00:57.865 net/octeon_ep: not in enabled drivers build config
00:00:57.865 net/pcap: not in enabled drivers build config
00:00:57.865 net/pfe: not in enabled drivers build config
00:00:57.865 net/qede: not in enabled drivers build config
00:00:57.865 net/ring: not in enabled drivers build config
00:00:57.865 net/sfc: not in enabled drivers build config
00:00:57.865 net/softnic: not in enabled drivers build config
00:00:57.865 net/tap: not in enabled drivers build config
00:00:57.865 net/thunderx: not in enabled drivers build config
00:00:57.865 net/txgbe: not in enabled drivers build config
00:00:57.865 net/vdev_netvsc: not in enabled drivers build config
00:00:57.865 net/vhost: not in enabled drivers build config
00:00:57.865 net/virtio: not in enabled drivers build config
00:00:57.865 net/vmxnet3: not in enabled drivers build config
00:00:57.865 raw/cnxk_bphy: not in enabled drivers build config
00:00:57.865 raw/cnxk_gpio: not in enabled drivers build config
00:00:57.865 raw/dpaa2_cmdif: not in enabled drivers build config
00:00:57.865 raw/ifpga: not in enabled drivers build config
00:00:57.865 raw/ntb: not in enabled drivers build config
00:00:57.865 raw/skeleton: not in enabled drivers build config
00:00:57.865 crypto/armv8: not in enabled drivers build config
00:00:57.865 crypto/bcmfs: not in enabled drivers build config
00:00:57.865 crypto/caam_jr: not in enabled drivers build config
00:00:57.865 crypto/ccp: not in enabled drivers build config
00:00:57.865 crypto/cnxk: not in enabled drivers build config
00:00:57.865 crypto/dpaa_sec: not in enabled drivers build config
00:00:57.865 crypto/dpaa2_sec: not in enabled drivers build config
00:00:57.865 crypto/ionic: not in enabled drivers build config
00:00:57.865 crypto/ipsec_mb: not in enabled drivers build config
00:00:57.865 crypto/mlx5: not in enabled drivers build config
00:00:57.865 crypto/mvsam: not in enabled drivers build config
00:00:57.865 crypto/nitrox: not in enabled drivers build config
00:00:57.865 crypto/null: not in enabled drivers build config
00:00:57.865 crypto/octeontx: not in enabled drivers build config
00:00:57.865 crypto/openssl: not in enabled drivers build config
00:00:57.865 crypto/scheduler: not in enabled drivers build config
00:00:57.865 crypto/uadk: not in enabled drivers build config
00:00:57.865 crypto/virtio: not in enabled drivers build config
00:00:57.865 compress/isal: not in enabled drivers build config
00:00:57.865 compress/mlx5: not in enabled drivers build config
00:00:57.866 compress/nitrox: not in enabled drivers build config
00:00:57.866 compress/octeontx: not in enabled drivers build config
00:00:57.866 compress/uadk: not in enabled drivers build config
00:00:57.866 compress/zlib: not in enabled drivers build config
00:00:57.866 regex/mlx5: not in enabled drivers build config
00:00:57.866 regex/cn9k: not in enabled drivers build config
00:00:57.866 ml/cnxk: not in enabled drivers build config
00:00:57.866 vdpa/ifc: not in enabled drivers build config
00:00:57.866 vdpa/mlx5: not in enabled drivers build config
00:00:57.866 vdpa/nfp: not in enabled drivers build config
00:00:57.866 vdpa/sfc: not in enabled drivers build config
00:00:57.866 event/cnxk: not in enabled drivers build config
00:00:57.866 event/dlb2: not in enabled drivers build config
00:00:57.866 event/dpaa: not in enabled drivers build config
00:00:57.866 event/dpaa2: not in enabled drivers build config
00:00:57.866 event/dsw: not in enabled drivers build config
00:00:57.866 event/opdl: not in enabled drivers build config
00:00:57.866 event/skeleton: not in enabled drivers build config
00:00:57.866 event/sw: not in enabled drivers build config
00:00:57.866 event/octeontx: not in enabled drivers build config
00:00:57.866 baseband/acc: not in enabled drivers build config
00:00:57.866 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:57.866 baseband/fpga_lte_fec: not in enabled drivers build config
00:00:57.866 baseband/la12xx: not in enabled drivers build config
00:00:57.866 baseband/null: not in enabled drivers build config
00:00:57.866 baseband/turbo_sw: not in enabled drivers build config
00:00:57.866 gpu/cuda: not in enabled drivers build config
00:00:57.866
00:00:57.866
00:00:57.866 Build targets in project: 221
00:00:57.866
00:00:57.866 DPDK 24.07.0-rc3
00:00:57.866
00:00:57.866 User defined options
00:00:57.866 libdir : lib
00:00:57.866 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:57.866 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:57.866 c_link_args :
00:00:57.866 enable_docs : false
00:00:57.866 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:57.866 enable_kmods : false
00:00:57.866 machine : native
00:00:57.866 tests : false
00:00:57.866
00:00:57.866 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:57.866 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:00:58.139 13:28:54 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j112
00:00:58.139 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:00:58.139 [1/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:58.139 [2/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:58.405 [3/720] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:00:58.405 [4/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:58.405 [5/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:58.405 [6/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:58.405 [7/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:58.405 [8/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:58.405 [9/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:58.405 [10/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:58.405 [11/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:58.405 [12/720] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:58.405 [13/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:58.405 [14/720] Linking static target lib/librte_kvargs.a 00:00:58.405 [15/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:58.405 [16/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:58.405 [17/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:58.405 [18/720] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:58.405 [19/720] Compiling C object lib/librte_log.a.p/log_log.c.o 00:00:58.405 [20/720] Linking static target lib/librte_pci.a 00:00:58.405 [21/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:58.405 [22/720] Linking static target lib/librte_log.a 00:00:58.406 [23/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
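(Note: the configure summary above also explains the long "not in enabled drivers build config" list: enable_drivers restricts the build to the PCI/vdev buses, the ring mempool, and the i40e NIC driver that this nvmf-tcp test bed uses, so every other DPDK driver is skipped. The WARNING is meson flagging the legacy `meson [options]` spelling of the setup command. As a rough sketch only, the modern equivalent of the logged options would look like the following; the CI wrapper's actual command line is not shown in this log, the `build-tmp` directory name is inferred from the ninja step, and the option spellings are reconstructed solely from the "User defined options" block above:

  meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
  ninja -C build-tmp -j112

The build itself then proceeds as recorded below.)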
00:00:58.668 [24/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:58.668 [25/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:58.668 [26/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:58.668 [27/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:58.668 [28/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:58.668 [29/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:58.668 [30/720] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:00:58.668 [31/720] Linking static target lib/librte_argparse.a 00:00:58.668 [32/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:58.931 [33/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:58.931 [34/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:58.931 [35/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:58.931 [36/720] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.931 [37/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:58.931 [38/720] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.931 [39/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:58.931 [40/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:58.931 [41/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:58.931 [42/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:00:58.931 [43/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:58.931 [44/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:58.931 [45/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:58.931 [46/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:58.931 [47/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:58.931 [48/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:58.931 [49/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:58.931 [50/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:58.931 [51/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:58.931 [52/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:58.931 [53/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:58.931 [54/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:58.931 [55/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:58.931 [56/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:58.931 [57/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:58.931 [58/720] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:58.931 [59/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:58.931 [60/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:58.931 [61/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:58.931 [62/720] Linking static target lib/librte_meter.a 00:00:58.931 [63/720] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:58.931 [64/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:58.931 [65/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:58.931 [66/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:58.931 [67/720] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:58.931 [68/720] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:00:58.931 [69/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:58.931 [70/720] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:58.931 [71/720] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:58.931 [72/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:58.931 [73/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:58.931 [74/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:58.931 [75/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:58.931 [76/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:59.190 [77/720] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:59.190 [78/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:59.190 [79/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:59.190 [80/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:59.190 [81/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:00:59.190 [82/720] Linking static target lib/librte_cmdline.a 00:00:59.190 [83/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:59.190 [84/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:59.190 [85/720] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:59.190 [86/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:00:59.190 [87/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:59.190 [88/720] Linking static target lib/librte_ring.a 00:00:59.190 [89/720] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.190 [90/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:59.190 [91/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:59.190 [92/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:59.190 [93/720] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:59.190 [94/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:59.190 [95/720] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:59.190 [96/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:59.190 [97/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:59.190 [98/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:59.190 [99/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:59.190 [100/720] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:59.190 [101/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:59.190 [102/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:59.190 [103/720] 
Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:59.190 [104/720] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:59.190 [105/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:59.190 [106/720] Linking static target lib/net/libnet_crc_avx512_lib.a 00:00:59.190 [107/720] Linking static target lib/librte_metrics.a 00:00:59.190 [108/720] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:59.190 [109/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:59.190 [110/720] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:59.190 [111/720] Linking static target lib/librte_net.a 00:00:59.190 [112/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:59.190 [113/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:59.190 [114/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:59.190 [115/720] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.190 [116/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:59.190 [117/720] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:59.454 [118/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:59.454 [119/720] Linking target lib/librte_log.so.24.2 00:00:59.454 [120/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:59.454 [121/720] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:59.454 [122/720] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:59.454 [123/720] Linking static target lib/librte_cfgfile.a 00:00:59.454 [124/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:59.454 [125/720] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:59.454 [126/720] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.454 [127/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:59.454 [128/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:59.454 [129/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:59.454 [130/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:59.454 [131/720] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:59.454 [132/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:59.454 [133/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:59.454 [134/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:59.454 [135/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:59.454 [136/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:59.454 [137/720] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:59.454 [138/720] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:00:59.454 [139/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:59.454 [140/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:59.454 [141/720] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.454 [142/720] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:59.715 [143/720] Linking static target lib/librte_bitratestats.a 00:00:59.715 [144/720] Linking target lib/librte_kvargs.so.24.2 00:00:59.715 [145/720] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:59.715 [146/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:59.715 [147/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:59.715 [148/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:59.715 [149/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:59.715 [150/720] Linking static target lib/librte_timer.a 00:00:59.715 [151/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:59.715 [152/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:59.715 [153/720] Linking static target lib/librte_mempool.a 00:00:59.715 [154/720] Linking target lib/librte_argparse.so.24.2 00:00:59.715 [155/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:00:59.715 [156/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:59.715 [157/720] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.715 [158/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:59.715 [159/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:59.715 [160/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:59.715 [161/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:59.715 [162/720] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:59.715 [163/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:59.715 [164/720] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:59.715 [165/720] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:59.715 [166/720] Linking static target lib/librte_jobstats.a 00:00:59.715 [167/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:59.715 [168/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:59.715 [169/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:59.715 [170/720] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:00:59.715 [171/720] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:59.715 [172/720] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.715 [173/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:59.715 [174/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:59.715 [175/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:59.980 [176/720] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:59.980 [177/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:59.980 [178/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:59.980 [179/720] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:59.980 [180/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:59.980 [181/720] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.980 [182/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:59.980 [183/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:59.980 [184/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:59.980 
[185/720] Linking static target lib/librte_compressdev.a 00:00:59.980 [186/720] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.980 [187/720] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:59.980 [188/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:59.980 [189/720] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:59.980 [190/720] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:00:59.980 [191/720] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:59.980 [192/720] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:00:59.980 [193/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:00:59.980 [194/720] Linking static target lib/librte_dispatcher.a 00:00:59.980 [195/720] Linking static target lib/member/libsketch_avx512_tmp.a 00:00:59.980 [196/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:59.980 [197/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:00:59.980 [198/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:59.980 [199/720] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:59.980 [200/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:59.980 [201/720] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:59.980 [202/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:59.980 [203/720] Linking static target lib/librte_latencystats.a 00:00:59.980 [204/720] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:59.980 [205/720] Linking static target lib/librte_telemetry.a 00:00:59.980 [206/720] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:59.980 [207/720] Linking static target lib/librte_bbdev.a 00:00:59.980 [208/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:59.980 [209/720] Linking static target lib/librte_rcu.a 00:00:59.980 [210/720] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:59.980 [211/720] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:59.980 [212/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:59.980 [213/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:00:59.980 [214/720] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:59.980 [215/720] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:59.980 [216/720] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:59.980 [217/720] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:59.980 [218/720] Linking static target lib/librte_gpudev.a 00:00:59.980 [219/720] Linking static target lib/librte_eal.a 00:01:00.249 [220/720] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:00.249 [221/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:00.249 [222/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:00.249 [223/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:00.249 [224/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:00.249 [225/720] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:00.249 [226/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:00.249 
[227/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:00.249 [228/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:00.249 [229/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:00.249 [230/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:00.249 [231/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:00.249 [232/720] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.249 [233/720] Linking static target lib/librte_gro.a 00:01:00.249 [234/720] Linking static target lib/librte_stack.a 00:01:00.249 [235/720] Linking static target lib/librte_dmadev.a 00:01:00.249 [236/720] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:00.249 [237/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:00.249 [238/720] Linking static target lib/librte_gso.a 00:01:00.249 [239/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:00.250 [240/720] Linking static target lib/librte_distributor.a 00:01:00.250 [241/720] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:00.250 [242/720] Linking static target lib/librte_regexdev.a 00:01:00.250 [243/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:00.250 [244/720] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:00.250 [245/720] Linking static target lib/librte_mbuf.a 00:01:00.250 [246/720] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.250 [247/720] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:00.250 [248/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:00.250 [249/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:00.250 [250/720] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:00.250 [251/720] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:00.250 [252/720] Linking static target lib/librte_ip_frag.a 00:01:00.250 [253/720] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:00.250 [254/720] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:00.250 [255/720] Linking static target lib/librte_rawdev.a 00:01:00.250 [256/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:00.250 [257/720] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:00.515 [258/720] Linking static target lib/librte_power.a 00:01:00.515 [259/720] Linking static target lib/librte_pcapng.a 00:01:00.515 [260/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:00.515 [261/720] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.515 [262/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:00.515 [263/720] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:00.515 [264/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:00.515 [265/720] Linking static target lib/librte_mldev.a 00:01:00.515 [266/720] Linking static target lib/librte_reorder.a 00:01:00.515 [267/720] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.515 [268/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:00.515 [269/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:00.515 
[270/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:00.515 [271/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:00.515 [272/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:00.515 [273/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:00.515 [274/720] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:00.515 [275/720] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.515 [276/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:00.515 [277/720] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.515 [278/720] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.515 [279/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:00.515 [280/720] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:00.515 [281/720] Linking static target lib/librte_security.a 00:01:00.515 [282/720] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:00.515 [283/720] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:00.515 [284/720] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.515 [285/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:00.779 [286/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:00.779 [287/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:00.779 [288/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:00.779 [289/720] Linking static target lib/librte_bpf.a 00:01:00.779 [290/720] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:00.779 [291/720] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.779 [292/720] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:00.779 [293/720] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:00.779 [294/720] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:00.779 [295/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:00.779 [296/720] Linking static target lib/librte_lpm.a 00:01:00.779 [297/720] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:00.779 [298/720] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.779 [299/720] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:00.779 [300/720] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.779 [301/720] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.779 [302/720] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:00.779 [303/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:00.779 [304/720] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.779 [305/720] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:00.779 [306/720] Linking static target lib/librte_rib.a 00:01:00.779 [307/720] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.779 [308/720] Linking target lib/librte_telemetry.so.24.2 00:01:00.779 [309/720] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:00.779 [310/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:00.779 [311/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:00.779 [312/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:00.779 [313/720] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:01.041 [314/720] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:01.042 [315/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:01.042 [316/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:01.042 [317/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:01.042 [318/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:01.042 [319/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:01.042 [320/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:01.042 [321/720] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:01.042 [322/720] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.042 [323/720] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:01.042 [324/720] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:01.042 [325/720] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:01.042 [326/720] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:01.042 [327/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:01.042 [328/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:01.042 [329/720] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.042 [330/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:01.042 [331/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:01.042 [332/720] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.042 [333/720] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:01.042 [334/720] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.301 [335/720] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:01.301 [336/720] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:01.301 [337/720] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:01.301 [338/720] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:01.301 [339/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:01.301 [340/720] Linking static target lib/librte_efd.a 00:01:01.301 [341/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:01.301 [342/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:01.301 [343/720] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.301 [344/720] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:01.301 [345/720] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:01.301 [346/720] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:01.301 [347/720] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.301 [348/720] 
Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:01.301 [349/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:01.301 [350/720] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:01.301 [351/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:01.301 [352/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:01.301 [353/720] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.301 [354/720] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:01.301 [355/720] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:01.301 [356/720] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:01.301 [357/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:01.301 [358/720] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:01.301 [359/720] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.301 [360/720] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:01.566 [361/720] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:01.566 [362/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:01.566 [363/720] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:01.566 [364/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:01.566 [365/720] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:01.566 [366/720] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:01.566 [367/720] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:01.566 [368/720] Linking static target lib/librte_fib.a 00:01:01.566 [369/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:01.566 [370/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:01.566 [371/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:01.566 [372/720] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.566 [373/720] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.566 [374/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:01.566 [375/720] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.566 [376/720] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.566 [377/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:01.566 [378/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:01.566 [379/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:01.567 [380/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:01.567 [381/720] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.567 [382/720] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:01.567 [383/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:01.567 [384/720] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:01.567 [385/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:01.567 [386/720] Compiling C object 
lib/librte_node.a.p/node_ip4_local.c.o 00:01:01.567 [387/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:01.567 [388/720] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:01.831 [389/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:01.831 [390/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:01.831 [391/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:01.831 [392/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:01.831 [393/720] Linking static target lib/librte_graph.a 00:01:01.831 [394/720] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:01.831 [395/720] Linking static target lib/librte_pdump.a 00:01:01.831 [396/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:01.831 [397/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:01.831 [398/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:01.831 [399/720] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:01.831 [400/720] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:01.831 [401/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:01.831 [402/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:01.832 [403/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:02.094 [404/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:02.094 [405/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:02.094 [406/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:02.094 [407/720] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:02.094 [408/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:02.095 [409/720] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:02.095 [410/720] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:02.095 [411/720] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:02.095 [412/720] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:02.095 [413/720] Linking static target drivers/librte_bus_vdev.a 00:01:02.095 [414/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:02.095 [415/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:02.095 [416/720] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.095 [417/720] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:02.095 [418/720] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:02.095 [419/720] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:02.095 [420/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:02.095 [421/720] Linking static target lib/librte_sched.a 00:01:02.095 [422/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:02.095 [423/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:02.095 [424/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:02.095 [425/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:02.095 
[426/720] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:02.095 [427/720] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:02.095 [428/720] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:02.095 [429/720] Linking static target lib/librte_table.a 00:01:02.095 [430/720] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:02.095 [431/720] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:02.095 [432/720] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:02.095 [433/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:02.356 [434/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:02.356 [435/720] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:02.356 [436/720] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:02.356 [437/720] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:02.356 [438/720] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.356 [439/720] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:02.356 [440/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:02.356 [441/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:02.356 [442/720] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:02.356 [443/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:02.356 [444/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:02.356 [445/720] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:02.356 [446/720] Linking static target lib/librte_cryptodev.a 00:01:02.356 [447/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:02.356 [448/720] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:02.356 [449/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:02.356 [450/720] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:02.356 [451/720] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:02.356 [452/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:02.356 [453/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:02.356 [454/720] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:02.356 [455/720] Linking static target drivers/librte_bus_pci.a 00:01:02.619 [456/720] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:02.619 [457/720] Linking static target lib/librte_ipsec.a 00:01:02.619 [458/720] Linking static target lib/librte_member.a 00:01:02.619 [459/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:02.619 [460/720] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:02.619 [461/720] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:02.619 [462/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:02.619 [463/720] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:02.619 [464/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:02.619 [465/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:02.619 [466/720] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:02.619 
[467/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:02.619 [468/720] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.619 [469/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:02.619 [470/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:02.619 [471/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:02.619 [472/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:02.619 [473/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:02.619 [474/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:02.619 [475/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:02.619 [476/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:02.619 [477/720] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:02.619 [478/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:02.619 [479/720] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:02.619 [480/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:02.619 [481/720] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:02.619 [482/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:02.619 [483/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:02.619 [484/720] Linking static target lib/librte_node.a 00:01:02.619 [485/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:02.878 [486/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:02.878 [487/720] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:02.878 [488/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:02.878 [489/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:02.878 [490/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:02.878 [491/720] Linking static target lib/librte_hash.a 00:01:02.878 [492/720] Linking static target lib/librte_pdcp.a 00:01:02.878 [493/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:02.878 [494/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:02.878 [495/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:02.878 [496/720] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.878 [497/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:02.878 [498/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:02.878 [499/720] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:02.878 [500/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:02.878 [501/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:02.878 [502/720] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:02.878 [503/720] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:01:02.878 [504/720] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.878 [505/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:02.878 [506/720] Linking static target drivers/librte_mempool_ring.a 00:01:02.878 [507/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:02.878 [508/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:02.878 [509/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:02.878 [510/720] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.878 [511/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:02.878 [512/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:02.878 [513/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:02.878 [514/720] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:02.878 [515/720] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.878 [516/720] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.878 [517/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:02.878 [518/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:02.878 [519/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:02.878 [520/720] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:02.878 [521/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:02.878 [522/720] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:02.878 [523/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:03.137 [524/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:03.137 [525/720] Linking static target lib/librte_port.a 00:01:03.137 [526/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:03.137 [527/720] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:03.137 [528/720] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:03.137 [529/720] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:03.137 [530/720] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:03.137 [531/720] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:03.137 [532/720] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.137 [533/720] Linking static target lib/acl/libavx2_tmp.a 00:01:03.137 [534/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:03.137 [535/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:03.137 [536/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:03.137 [537/720] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:03.137 [538/720] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.137 [539/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:03.137 [540/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:03.137 [541/720] Compiling C object 
app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:03.137 [542/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:03.137 [543/720] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:03.137 [544/720] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.137 [545/720] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:03.137 [546/720] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.137 [547/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:03.137 [548/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:03.137 [549/720] Linking static target lib/librte_eventdev.a 00:01:03.137 [550/720] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:03.137 [551/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:03.394 [552/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:03.394 [553/720] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:03.394 [554/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:03.394 [555/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:03.394 [556/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:03.394 [557/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:03.394 [558/720] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:03.394 [559/720] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:03.394 [560/720] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:03.394 [561/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:03.394 [562/720] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:03.394 [563/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:03.394 [564/720] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:03.394 [565/720] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:03.394 [566/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:03.653 [567/720] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:03.653 [568/720] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:03.653 [569/720] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:03.653 [570/720] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:03.653 [571/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:03.653 [572/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:03.653 [573/720] Linking static target lib/librte_acl.a 00:01:03.653 [574/720] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:03.653 [575/720] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.653 [576/720] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:03.653 [577/720] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:03.653 [578/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:03.653 [579/720] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:03.653 [580/720] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.911 [581/720] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:03.911 [582/720] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:03.911 [583/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:03.911 [584/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:03.911 [585/720] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.170 [586/720] Linking static target lib/librte_ethdev.a 00:01:04.170 [587/720] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.170 [588/720] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:04.429 [589/720] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:04.429 [590/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:04.996 [591/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:04.996 [592/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:04.996 [593/720] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:05.563 [594/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:05.822 [595/720] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:05.822 [596/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:06.081 [597/720] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:06.081 [598/720] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:06.081 [599/720] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:06.081 [600/720] Linking static target drivers/librte_net_i40e.a 00:01:06.646 [601/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:06.904 [602/720] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.163 [603/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:07.163 [604/720] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.163 [605/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:12.436 [606/720] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.436 [607/720] Linking target lib/librte_eal.so.24.2 00:01:12.436 [608/720] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:12.436 [609/720] Linking target lib/librte_cfgfile.so.24.2 00:01:12.436 [610/720] Linking target drivers/librte_bus_vdev.so.24.2 00:01:12.436 [611/720] Linking target lib/librte_ring.so.24.2 00:01:12.436 [612/720] Linking target lib/librte_rawdev.so.24.2 00:01:12.436 [613/720] Linking target lib/librte_pci.so.24.2 00:01:12.436 [614/720] Linking target lib/librte_meter.so.24.2 00:01:12.436 [615/720] Linking target lib/librte_timer.so.24.2 00:01:12.436 [616/720] Linking target lib/librte_dmadev.so.24.2 00:01:12.436 [617/720] Linking target lib/librte_jobstats.so.24.2 00:01:12.436 [618/720] Linking target lib/librte_stack.so.24.2 00:01:12.436 [619/720] Linking target lib/librte_acl.so.24.2 00:01:12.694 [620/720] Generating symbol file 
lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:12.694 [621/720] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:12.694 [622/720] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:12.694 [623/720] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:12.694 [624/720] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:12.694 [625/720] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:12.694 [626/720] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:12.694 [627/720] Linking target drivers/librte_bus_pci.so.24.2 00:01:12.694 [628/720] Linking target lib/librte_rcu.so.24.2 00:01:12.694 [629/720] Linking target lib/librte_mempool.so.24.2 00:01:12.694 [630/720] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:12.694 [631/720] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:12.694 [632/720] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:12.953 [633/720] Linking target lib/librte_mbuf.so.24.2 00:01:12.953 [634/720] Linking target lib/librte_rib.so.24.2 00:01:12.953 [635/720] Linking target drivers/librte_mempool_ring.so.24.2 00:01:12.953 [636/720] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:12.953 [637/720] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:12.953 [638/720] Linking target lib/librte_bbdev.so.24.2 00:01:12.953 [639/720] Linking target lib/librte_distributor.so.24.2 00:01:12.953 [640/720] Linking target lib/librte_regexdev.so.24.2 00:01:12.953 [641/720] Linking target lib/librte_net.so.24.2 00:01:12.953 [642/720] Linking target lib/librte_gpudev.so.24.2 00:01:12.953 [643/720] Linking target lib/librte_cryptodev.so.24.2 00:01:12.953 [644/720] Linking target lib/librte_compressdev.so.24.2 00:01:12.953 [645/720] Linking target lib/librte_fib.so.24.2 00:01:12.953 [646/720] Linking target lib/librte_sched.so.24.2 00:01:12.953 [647/720] Linking target lib/librte_reorder.so.24.2 00:01:12.953 [648/720] Linking target lib/librte_mldev.so.24.2 00:01:13.245 [649/720] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.245 [650/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:13.245 [651/720] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:13.245 [652/720] Linking static target lib/librte_pipeline.a 00:01:13.245 [653/720] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:13.245 [654/720] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:13.245 [655/720] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:13.245 [656/720] Linking target lib/librte_security.so.24.2 00:01:13.245 [657/720] Linking target lib/librte_hash.so.24.2 00:01:13.245 [658/720] Linking target lib/librte_cmdline.so.24.2 00:01:13.245 [659/720] Linking target lib/librte_ethdev.so.24.2 00:01:13.504 [660/720] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:13.504 [661/720] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:13.504 [662/720] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 
00:01:13.504 [663/720] Linking target lib/librte_efd.so.24.2 00:01:13.504 [664/720] Linking target lib/librte_lpm.so.24.2 00:01:13.504 [665/720] Linking target lib/librte_pdcp.so.24.2 00:01:13.504 [666/720] Linking target lib/librte_member.so.24.2 00:01:13.504 [667/720] Linking target lib/librte_metrics.so.24.2 00:01:13.504 [668/720] Linking target lib/librte_ip_frag.so.24.2 00:01:13.504 [669/720] Linking target lib/librte_ipsec.so.24.2 00:01:13.504 [670/720] Linking target lib/librte_gso.so.24.2 00:01:13.504 [671/720] Linking target lib/librte_gro.so.24.2 00:01:13.504 [672/720] Linking target lib/librte_bpf.so.24.2 00:01:13.504 [673/720] Linking target lib/librte_pcapng.so.24.2 00:01:13.504 [674/720] Linking target lib/librte_power.so.24.2 00:01:13.504 [675/720] Linking target lib/librte_eventdev.so.24.2 00:01:13.504 [676/720] Linking target drivers/librte_net_i40e.so.24.2 00:01:13.504 [677/720] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:13.504 [678/720] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:13.504 [679/720] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:13.504 [680/720] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:13.504 [681/720] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:13.504 [682/720] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:13.504 [683/720] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:13.763 [684/720] Linking target lib/librte_graph.so.24.2 00:01:13.763 [685/720] Linking target lib/librte_bitratestats.so.24.2 00:01:13.763 [686/720] Linking target lib/librte_latencystats.so.24.2 00:01:13.763 [687/720] Linking target lib/librte_dispatcher.so.24.2 00:01:13.763 [688/720] Linking target lib/librte_pdump.so.24.2 00:01:13.763 [689/720] Linking target lib/librte_port.so.24.2 00:01:13.763 [690/720] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:13.763 [691/720] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:13.763 [692/720] Linking target lib/librte_node.so.24.2 00:01:13.763 [693/720] Linking target lib/librte_table.so.24.2 00:01:14.022 [694/720] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:14.280 [695/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:14.280 [696/720] Linking static target lib/librte_vhost.a 00:01:14.538 [697/720] Linking target app/dpdk-test-fib 00:01:14.797 [698/720] Linking target app/dpdk-test-cmdline 00:01:14.797 [699/720] Linking target app/dpdk-dumpcap 00:01:14.797 [700/720] Linking target app/dpdk-test-acl 00:01:14.797 [701/720] Linking target app/dpdk-test-dma-perf 00:01:14.797 [702/720] Linking target app/dpdk-graph 00:01:14.797 [703/720] Linking target app/dpdk-test-compress-perf 00:01:14.797 [704/720] Linking target app/dpdk-test-mldev 00:01:14.797 [705/720] Linking target app/dpdk-pdump 00:01:14.797 [706/720] Linking target app/dpdk-test-pipeline 00:01:14.797 [707/720] Linking target app/dpdk-proc-info 00:01:14.797 [708/720] Linking target app/dpdk-test-gpudev 00:01:14.797 [709/720] Linking target app/dpdk-test-flow-perf 00:01:14.797 [710/720] Linking target app/dpdk-test-security-perf 00:01:14.797 [711/720] Linking target app/dpdk-test-bbdev 00:01:14.797 [712/720] Linking target app/dpdk-test-crypto-perf 00:01:14.797 [713/720] 
Linking target app/dpdk-test-eventdev 00:01:14.797 [714/720] Linking target app/dpdk-test-sad 00:01:14.797 [715/720] Linking target app/dpdk-test-regex 00:01:14.797 [716/720] Linking target app/dpdk-testpmd 00:01:16.175 [717/720] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.434 [718/720] Linking target lib/librte_vhost.so.24.2 00:01:19.057 [719/720] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.057 [720/720] Linking target lib/librte_pipeline.so.24.2 00:01:19.057 13:29:15 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:19.057 13:29:15 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:19.057 13:29:15 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j112 install 00:01:19.057 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:19.057 [0/1] Installing files. 00:01:19.320 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:19.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:19.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:19.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:19.320 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:19.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:19.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:19.320 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:19.321 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.321 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:19.322 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:19.322 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.322 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:19.323 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:19.324 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:19.324 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:19.324 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:19.325 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.326 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:19.327 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_eal.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.327 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
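[Editorial note, not part of the log] Each DPDK library above is staged twice: as a static archive (.a) and as an ABI-versioned shared object (.so.24.2) under dpdk/build/lib. A minimal sketch of a consumer of the installed EAL and log libraries follows; the file name and compile line are hypothetical, and it assumes the rte_eal.h/rte_log.h headers staged later in this log.

    /* minimal_eal.c -- hedged sketch, not part of this build log.
     * Hypothetical compile line against the staged tree:
     *   cc minimal_eal.c -I.../dpdk/build/include -L.../dpdk/build/lib \
     *      -lrte_eal -lrte_log -lrte_kvargs -lrte_telemetry
     */
    #include <rte_eal.h>
    #include <rte_log.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses EAL options (cores, memory, devices) and
         * returns the number of arguments it consumed, or -1 on error. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0)
            return 1;
        RTE_LOG(INFO, USER1, "EAL initialized, %d args consumed\n", ret);
        rte_eal_cleanup();
        return 0;
    }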
00:01:19.328 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.328 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
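[Editorial note, not part of the log] Among the libraries staged in this stretch is librte_lpm, the longest-prefix-match routing table. A minimal sketch of its API, assuming a running EAL (rte_lpm_create allocates from DPDK memory):

    /* hedged sketch of the just-installed librte_lpm API */
    #include <rte_lpm.h>
    #include <rte_ip.h>

    static void lpm_demo(void)
    {
        struct rte_lpm_config cfg = {
            .max_rules = 1024,
            .number_tbl8s = 256,
        };
        struct rte_lpm *lpm = rte_lpm_create("demo", 0 /* socket */, &cfg);
        uint32_t next_hop;

        if (lpm == NULL)
            return;
        /* route 10.0.0.0/8 -> next hop id 1, then look an address up */
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);
        if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
            (void)next_hop;      /* hit: next_hop == 1 */
        rte_lpm_free(lpm);
    }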
00:01:19.590 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_table.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.590 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.591 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.591 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:19.591 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.591 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:19.591 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.591 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:19.591 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:19.591 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:19.591 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
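[Editorial note, not part of the log] The driver shared objects above land in the versioned plugin directory lib/dpdk/pmds-24.2; in shared-library builds EAL typically loads PMDs from that default driver path (or from paths given with -d), while the .a variants are linked in statically. A hedged sketch of how probed ports become visible once the bus and net PMDs are loaded:

    /* hedged sketch: ports probed by librte_bus_pci / librte_net_i40e
     * et al. are enumerated through the ethdev API after EAL init. */
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;
        printf("%u ethdev port(s) available\n",
               rte_eth_dev_count_avail());
        rte_eal_cleanup();
        return 0;
    }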
00:01:19.591 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:19.591 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
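[Editorial note, not part of the log] The rte_ring_*.h headers staged just above expose DPDK's lockless ring, including the HTS/RTS and peek/zero-copy variants (the _pvt.h files are internal implementation headers installed alongside). A minimal sketch of the base API, assuming a running EAL:

    /* hedged sketch: single-producer/single-consumer ring round-trip */
    #include <rte_ring.h>

    static void ring_demo(void)
    {
        struct rte_ring *r = rte_ring_create("demo_ring", 1024,
                SOCKET_ID_ANY, RING_F_SP_ENQ | RING_F_SC_DEQ);
        void *obj = (void *)0x1, *out = NULL;

        if (r == NULL)
            return;
        if (rte_ring_enqueue(r, obj) == 0 &&
            rte_ring_dequeue(r, &out) == 0)
            (void)out;           /* out == obj */
        rte_ring_free(r);
    }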
00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
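The run of entries above is Meson's install step staging DPDK's public API headers: whatever subsystem they come from in the source tree (lib/eal, lib/ring, lib/net, lib/vhost, lib/port, and so on), they all land in the single flat directory dpdk/build/include, a private staging prefix rather than /usr/local. A minimal sketch of compiling against that staged tree, assuming a Linux host with gcc; the -I path is taken from the log, while hdr_check.c is a hypothetical throwaway file and extra flags may be needed on some targets:

    # Hypothetical smoke test: one -I flag covers every installed subsystem header.
    printf '#include <rte_ring.h>\n#include <rte_vhost.h>\nint main(void){return 0;}\n' > hdr_check.c
    gcc -fsyntax-only hdr_check.c \
        -I /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include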
00:01:19.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:19.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:19.856 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:19.856 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:19.856 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:19.856 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:19.856 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:01:19.856 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:01:19.856 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:19.856 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:19.856 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:19.856 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:19.856 Installing symlink pointing to librte_ring.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:19.856 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:19.856 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:19.856 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:19.856 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:19.856 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:19.856 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:19.856 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:19.856 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:19.856 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:19.856 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:19.856 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:19.856 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:19.856 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:19.856 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:19.856 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:19.856 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:19.856 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:19.856 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:19.856 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:19.856 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:19.856 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:19.856 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:19.856 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:19.856 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:19.857 
Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:19.857 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:19.857 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:19.857 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:19.857 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:19.857 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:19.857 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:19.857 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:19.857 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:19.857 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:19.857 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:19.857 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:19.857 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:19.857 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:19.857 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:19.857 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:19.857 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:19.857 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:19.857 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:19.857 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:19.857 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:19.857 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:19.857 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:19.857 Installing symlink pointing to librte_gpudev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:19.857 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:19.857 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:19.857 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:19.857 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:19.857 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:19.857 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:19.857 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:19.857 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:19.857 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:19.857 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:19.857 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:19.857 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:19.857 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:19.857 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:19.857 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:19.857 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:19.857 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:19.857 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:19.857 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:19.857 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:19.857 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:19.857 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:19.857 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:19.857 Installing symlink pointing to librte_mldev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:19.857 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:19.857 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:19.857 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:19.857 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:19.857 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:19.857 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:19.857 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:19.857 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:19.857 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:19.857 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:19.857 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:19.857 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:19.857 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:19.857 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:19.857 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:19.857 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:19.857 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:19.857 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:19.857 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:19.857 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:19.857 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:19.857 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:19.857 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:19.857 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 
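Each DPDK library above is installed with the conventional three-name shared-object chain: the real DSO carries the full version (for example librte_eal.so.24.2), the librte_eal.so.24 symlink matches the SONAME that binaries resolve at run time, and the unversioned librte_eal.so is the development link used by the linker's -lrte_eal lookup. A minimal sketch of the equivalent chain, shown with plain ln for illustration (the log's installer creates these links via meson):

    # Illustrative only: the three-name chain created for each librte_* library.
    ln -sf librte_eal.so.24.2 librte_eal.so.24   # SONAME link, resolved at run time
    ln -sf librte_eal.so.24   librte_eal.so      # dev link, resolved by 'gcc ... -lrte_eal'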
00:01:19.857 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:19.857 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:19.857 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:19.857 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:19.857 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:19.857 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:19.857 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:19.857 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:01:19.857 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:01:19.857 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:01:19.857 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:01:19.857 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:01:19.857 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:01:19.857 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:01:19.857 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:01:19.857 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:01:19.857 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:01:19.857 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:01:19.858 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:01:19.858 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:01:19.858 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:01:19.858 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:01:19.858 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:01:19.858 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:01:19.858 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:01:19.858 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:01:19.858 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:01:19.858 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:01:19.858 13:29:16 build_native_dpdk -- 
common/autobuild_common.sh@210 -- $ cat
00:01:19.858 13:29:16 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.858
00:01:19.858 real 0m27.474s
00:01:19.858 user 8m14.968s
00:01:19.858 sys 2m41.406s
00:01:19.858 13:29:16 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:19.858 13:29:16 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:01:19.858 ************************************
00:01:19.858 END TEST build_native_dpdk
00:01:19.858 ************************************
00:01:19.858 13:29:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:19.858 13:29:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:19.858 13:29:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:19.858 13:29:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:19.858 13:29:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:19.858 13:29:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:19.858 13:29:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:19.858 13:29:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:01:19.858 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:01:20.117 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:20.117 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:20.117 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:20.685 Using 'verbs' RDMA provider
00:01:33.832 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:48.719 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:48.719 Creating mk/config.mk...done.
00:01:48.719 Creating mk/cc.flags.mk...done.
00:01:48.719 Type 'make' to build.
00:01:48.719 13:29:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:48.719 13:29:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:48.719 13:29:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:48.719 13:29:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.719 ************************************
00:01:48.719 START TEST make
00:01:48.719 ************************************
00:01:48.719 13:29:44 make -- common/autotest_common.sh@1125 -- $ make -j112
00:01:48.719 make[1]: Nothing to be done for 'all'.
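The configure step above locates the DPDK tree built earlier through pkg-config metadata (the libdpdk.pc and libdpdk-libs.pc files installed into dpdk/build/lib/pkgconfig) rather than through hard-coded library lists, which is why the log prints the pkgconfig path just before the DPDK libraries and includes lines. A minimal sketch of querying that staged tree the same way, assuming pkg-config is on the PATH:

    # Hypothetical query against the staged .pc files shown in the log.
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # version of the checked-out DPDK tree
    pkg-config --cflags --libs libdpdk   # the -I/-L/-l set a build would consume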
00:01:49.655 The Meson build system 00:01:49.655 Version: 1.3.1 00:01:49.655 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:49.655 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:49.655 Build type: native build 00:01:49.655 Project name: libvfio-user 00:01:49.655 Project version: 0.0.1 00:01:49.655 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.655 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:49.655 Host machine cpu family: x86_64 00:01:49.655 Host machine cpu: x86_64 00:01:49.655 Run-time dependency threads found: YES 00:01:49.655 Library dl found: YES 00:01:49.655 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.655 Run-time dependency json-c found: YES 0.17 00:01:49.655 Run-time dependency cmocka found: YES 1.1.7 00:01:49.655 Program pytest-3 found: NO 00:01:49.655 Program flake8 found: NO 00:01:49.655 Program misspell-fixer found: NO 00:01:49.655 Program restructuredtext-lint found: NO 00:01:49.655 Program valgrind found: YES (/usr/bin/valgrind) 00:01:49.655 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.655 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.655 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.656 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:49.656 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:49.656 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:49.656 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:49.656 Build targets in project: 8 00:01:49.656 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:49.656 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:49.656 00:01:49.656 libvfio-user 0.0.1 00:01:49.656 00:01:49.656 User defined options 00:01:49.656 buildtype : debug 00:01:49.656 default_library: shared 00:01:49.656 libdir : /usr/local/lib 00:01:49.656 00:01:49.656 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.914 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:49.914 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:49.914 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:49.914 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:50.172 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:50.172 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:50.172 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:50.172 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:50.172 [8/37] Compiling C object samples/null.p/null.c.o 00:01:50.172 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:50.172 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:50.173 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:50.173 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:50.173 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:50.173 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:50.173 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:50.173 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:50.173 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:50.173 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:50.173 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:50.173 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:50.173 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:50.173 [22/37] Compiling C object samples/server.p/server.c.o 00:01:50.173 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:50.173 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:50.173 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:50.173 [26/37] Compiling C object samples/client.p/client.c.o 00:01:50.173 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:50.173 [28/37] Linking target samples/client 00:01:50.173 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:50.173 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:50.173 [31/37] Linking target test/unit_tests 00:01:50.431 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:50.431 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:50.431 [34/37] Linking target samples/lspci 00:01:50.431 [35/37] Linking target samples/server 00:01:50.431 [36/37] Linking target samples/null 00:01:50.431 [37/37] Linking target samples/gpio-pci-idio-16 00:01:50.431 INFO: autodetecting backend as ninja 00:01:50.431 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
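[Note] The "User defined options" summary above (buildtype debug, shared default_library, libdir /usr/local/lib) maps to a meson invocation along these lines. This is a sketch, not the exact command SPDK's build wrapper issues, and the source path is a placeholder:

  # Reproduce the libvfio-user configuration the summary reports.
  meson setup build-debug /path/to/libvfio-user \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C build-debug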
00:01:50.431 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:50.690 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:50.690 ninja: no work to do. 00:01:58.807 CC lib/ut/ut.o 00:01:58.807 CC lib/log/log.o 00:01:58.807 CC lib/log/log_flags.o 00:01:58.807 CC lib/log/log_deprecated.o 00:01:58.807 CC lib/ut_mock/mock.o 00:01:58.807 LIB libspdk_ut.a 00:01:58.807 SO libspdk_ut.so.2.0 00:01:58.807 LIB libspdk_ut_mock.a 00:01:58.807 LIB libspdk_log.a 00:01:58.807 SYMLINK libspdk_ut.so 00:01:58.807 SO libspdk_log.so.7.0 00:01:58.807 SO libspdk_ut_mock.so.6.0 00:01:58.807 SYMLINK libspdk_ut_mock.so 00:01:58.807 SYMLINK libspdk_log.so 00:01:59.064 CC lib/dma/dma.o 00:01:59.064 CXX lib/trace_parser/trace.o 00:01:59.064 CC lib/util/base64.o 00:01:59.064 CC lib/util/bit_array.o 00:01:59.064 CC lib/util/cpuset.o 00:01:59.064 CC lib/util/crc16.o 00:01:59.064 CC lib/util/crc32.o 00:01:59.064 CC lib/util/crc32c.o 00:01:59.064 CC lib/util/crc64.o 00:01:59.064 CC lib/util/crc32_ieee.o 00:01:59.064 CC lib/util/dif.o 00:01:59.064 CC lib/ioat/ioat.o 00:01:59.064 CC lib/util/fd.o 00:01:59.064 CC lib/util/fd_group.o 00:01:59.064 CC lib/util/file.o 00:01:59.064 CC lib/util/hexlify.o 00:01:59.064 CC lib/util/iov.o 00:01:59.064 CC lib/util/net.o 00:01:59.064 CC lib/util/math.o 00:01:59.064 CC lib/util/pipe.o 00:01:59.064 CC lib/util/strerror_tls.o 00:01:59.064 CC lib/util/string.o 00:01:59.064 CC lib/util/uuid.o 00:01:59.064 CC lib/util/xor.o 00:01:59.064 CC lib/util/zipf.o 00:01:59.064 LIB libspdk_dma.a 00:01:59.064 CC lib/vfio_user/host/vfio_user_pci.o 00:01:59.064 CC lib/vfio_user/host/vfio_user.o 00:01:59.064 SO libspdk_dma.so.4.0 00:01:59.323 SYMLINK libspdk_dma.so 00:01:59.323 LIB libspdk_ioat.a 00:01:59.323 SO libspdk_ioat.so.7.0 00:01:59.323 SYMLINK libspdk_ioat.so 00:01:59.323 LIB libspdk_vfio_user.a 00:01:59.323 SO libspdk_vfio_user.so.5.0 00:01:59.323 LIB libspdk_util.a 00:01:59.581 SYMLINK libspdk_vfio_user.so 00:01:59.581 SO libspdk_util.so.10.0 00:01:59.581 SYMLINK libspdk_util.so 00:01:59.581 LIB libspdk_trace_parser.a 00:01:59.581 SO libspdk_trace_parser.so.5.0 00:01:59.840 SYMLINK libspdk_trace_parser.so 00:02:00.099 CC lib/env_dpdk/pci.o 00:02:00.099 CC lib/env_dpdk/env.o 00:02:00.099 CC lib/env_dpdk/memory.o 00:02:00.099 CC lib/vmd/vmd.o 00:02:00.099 CC lib/env_dpdk/init.o 00:02:00.099 CC lib/env_dpdk/threads.o 00:02:00.099 CC lib/env_dpdk/pci_vmd.o 00:02:00.099 CC lib/vmd/led.o 00:02:00.099 CC lib/env_dpdk/pci_ioat.o 00:02:00.099 CC lib/env_dpdk/pci_virtio.o 00:02:00.099 CC lib/env_dpdk/pci_event.o 00:02:00.099 CC lib/env_dpdk/pci_idxd.o 00:02:00.099 CC lib/env_dpdk/sigbus_handler.o 00:02:00.099 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:00.099 CC lib/env_dpdk/pci_dpdk.o 00:02:00.099 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:00.099 CC lib/conf/conf.o 00:02:00.099 CC lib/rdma_utils/rdma_utils.o 00:02:00.099 CC lib/rdma_provider/common.o 00:02:00.099 CC lib/json/json_parse.o 00:02:00.099 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:00.099 CC lib/json/json_util.o 00:02:00.099 CC lib/json/json_write.o 00:02:00.099 CC lib/idxd/idxd.o 00:02:00.099 CC lib/idxd/idxd_user.o 00:02:00.099 CC lib/idxd/idxd_kernel.o 00:02:00.099 LIB libspdk_rdma_provider.a 00:02:00.357 SO libspdk_rdma_provider.so.6.0 00:02:00.357 LIB libspdk_conf.a 00:02:00.357 LIB libspdk_rdma_utils.a 00:02:00.357 SO libspdk_conf.so.6.0 
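[Note] The SO/SYMLINK pairs that start here (e.g. libspdk_ut.so.2.0 plus a bare libspdk_ut.so) follow the standard versioned shared-library layout: the versioned file is the real object and the unversioned symlink is the link-time name used by -lspdk_ut. A quick way to inspect the result with plain binutils; the build/lib path is assumed from the configure output earlier:

  # Versioned object vs. link-time symlink, and the embedded SONAME.
  ls -l build/lib/libspdk_ut.so*
  readelf -d build/lib/libspdk_ut.so.2.0 | grep SONAME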
00:02:00.357 LIB libspdk_json.a 00:02:00.357 SO libspdk_rdma_utils.so.1.0 00:02:00.357 SYMLINK libspdk_rdma_provider.so 00:02:00.357 SO libspdk_json.so.6.0 00:02:00.357 SYMLINK libspdk_conf.so 00:02:00.357 SYMLINK libspdk_rdma_utils.so 00:02:00.357 SYMLINK libspdk_json.so 00:02:00.357 LIB libspdk_idxd.a 00:02:00.357 LIB libspdk_vmd.a 00:02:00.617 SO libspdk_vmd.so.6.0 00:02:00.617 SO libspdk_idxd.so.12.0 00:02:00.617 SYMLINK libspdk_vmd.so 00:02:00.617 SYMLINK libspdk_idxd.so 00:02:00.876 CC lib/jsonrpc/jsonrpc_server.o 00:02:00.876 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:00.876 CC lib/jsonrpc/jsonrpc_client.o 00:02:00.876 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:00.876 LIB libspdk_env_dpdk.a 00:02:00.876 LIB libspdk_jsonrpc.a 00:02:01.135 SO libspdk_jsonrpc.so.6.0 00:02:01.135 SO libspdk_env_dpdk.so.15.0 00:02:01.135 SYMLINK libspdk_jsonrpc.so 00:02:01.135 SYMLINK libspdk_env_dpdk.so 00:02:01.395 CC lib/rpc/rpc.o 00:02:01.654 LIB libspdk_rpc.a 00:02:01.654 SO libspdk_rpc.so.6.0 00:02:01.654 SYMLINK libspdk_rpc.so 00:02:02.220 CC lib/trace/trace.o 00:02:02.220 CC lib/trace/trace_flags.o 00:02:02.220 CC lib/trace/trace_rpc.o 00:02:02.220 CC lib/keyring/keyring.o 00:02:02.220 CC lib/notify/notify.o 00:02:02.220 CC lib/keyring/keyring_rpc.o 00:02:02.220 CC lib/notify/notify_rpc.o 00:02:02.220 LIB libspdk_notify.a 00:02:02.220 LIB libspdk_trace.a 00:02:02.220 SO libspdk_notify.so.6.0 00:02:02.220 SO libspdk_trace.so.10.0 00:02:02.220 LIB libspdk_keyring.a 00:02:02.220 SYMLINK libspdk_notify.so 00:02:02.479 SO libspdk_keyring.so.1.0 00:02:02.479 SYMLINK libspdk_trace.so 00:02:02.479 SYMLINK libspdk_keyring.so 00:02:02.738 CC lib/sock/sock.o 00:02:02.738 CC lib/sock/sock_rpc.o 00:02:02.738 CC lib/thread/thread.o 00:02:02.738 CC lib/thread/iobuf.o 00:02:02.996 LIB libspdk_sock.a 00:02:02.996 SO libspdk_sock.so.10.0 00:02:02.996 SYMLINK libspdk_sock.so 00:02:03.563 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:03.563 CC lib/nvme/nvme_ctrlr.o 00:02:03.563 CC lib/nvme/nvme_fabric.o 00:02:03.563 CC lib/nvme/nvme_ns_cmd.o 00:02:03.563 CC lib/nvme/nvme_qpair.o 00:02:03.563 CC lib/nvme/nvme_ns.o 00:02:03.563 CC lib/nvme/nvme_pcie_common.o 00:02:03.563 CC lib/nvme/nvme_pcie.o 00:02:03.563 CC lib/nvme/nvme.o 00:02:03.563 CC lib/nvme/nvme_quirks.o 00:02:03.563 CC lib/nvme/nvme_transport.o 00:02:03.563 CC lib/nvme/nvme_discovery.o 00:02:03.563 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:03.563 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:03.563 CC lib/nvme/nvme_tcp.o 00:02:03.563 CC lib/nvme/nvme_opal.o 00:02:03.563 CC lib/nvme/nvme_io_msg.o 00:02:03.563 CC lib/nvme/nvme_poll_group.o 00:02:03.563 CC lib/nvme/nvme_zns.o 00:02:03.563 CC lib/nvme/nvme_stubs.o 00:02:03.563 CC lib/nvme/nvme_auth.o 00:02:03.563 CC lib/nvme/nvme_cuse.o 00:02:03.563 CC lib/nvme/nvme_vfio_user.o 00:02:03.563 CC lib/nvme/nvme_rdma.o 00:02:03.819 LIB libspdk_thread.a 00:02:03.819 SO libspdk_thread.so.10.1 00:02:03.819 SYMLINK libspdk_thread.so 00:02:04.077 CC lib/accel/accel.o 00:02:04.077 CC lib/accel/accel_rpc.o 00:02:04.077 CC lib/accel/accel_sw.o 00:02:04.077 CC lib/vfu_tgt/tgt_endpoint.o 00:02:04.077 CC lib/vfu_tgt/tgt_rpc.o 00:02:04.077 CC lib/blob/request.o 00:02:04.077 CC lib/blob/blobstore.o 00:02:04.077 CC lib/blob/zeroes.o 00:02:04.077 CC lib/blob/blob_bs_dev.o 00:02:04.077 CC lib/init/json_config.o 00:02:04.077 CC lib/init/rpc.o 00:02:04.077 CC lib/init/subsystem.o 00:02:04.077 CC lib/init/subsystem_rpc.o 00:02:04.400 CC lib/virtio/virtio.o 00:02:04.400 CC lib/virtio/virtio_vhost_user.o 00:02:04.400 CC lib/virtio/virtio_vfio_user.o 
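[Note] The jsonrpc_server/jsonrpc_client objects compiled in this stretch become the transport for SPDK's management plane; a running target built from them is driven with the bundled rpc.py. A usage sketch, assuming a live app and the conventional default socket path /var/tmp/spdk.sock:

  # List the JSON-RPC methods an SPDK app exposes on its Unix socket.
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods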
00:02:04.400 CC lib/virtio/virtio_pci.o 00:02:04.400 LIB libspdk_init.a 00:02:04.400 LIB libspdk_vfu_tgt.a 00:02:04.400 SO libspdk_init.so.5.0 00:02:04.400 LIB libspdk_virtio.a 00:02:04.401 SO libspdk_vfu_tgt.so.3.0 00:02:04.659 SO libspdk_virtio.so.7.0 00:02:04.659 SYMLINK libspdk_init.so 00:02:04.659 SYMLINK libspdk_vfu_tgt.so 00:02:04.659 SYMLINK libspdk_virtio.so 00:02:04.917 LIB libspdk_accel.a 00:02:04.917 CC lib/event/app.o 00:02:04.917 CC lib/event/app_rpc.o 00:02:04.917 CC lib/event/reactor.o 00:02:04.917 CC lib/event/log_rpc.o 00:02:04.917 CC lib/event/scheduler_static.o 00:02:04.917 SO libspdk_accel.so.16.0 00:02:04.917 SYMLINK libspdk_accel.so 00:02:04.917 LIB libspdk_nvme.a 00:02:05.175 SO libspdk_nvme.so.13.1 00:02:05.175 LIB libspdk_event.a 00:02:05.175 SO libspdk_event.so.14.0 00:02:05.434 CC lib/bdev/bdev.o 00:02:05.434 CC lib/bdev/bdev_rpc.o 00:02:05.434 CC lib/bdev/part.o 00:02:05.434 CC lib/bdev/bdev_zone.o 00:02:05.434 CC lib/bdev/scsi_nvme.o 00:02:05.434 SYMLINK libspdk_event.so 00:02:05.434 SYMLINK libspdk_nvme.so 00:02:06.369 LIB libspdk_blob.a 00:02:06.369 SO libspdk_blob.so.11.0 00:02:06.369 SYMLINK libspdk_blob.so 00:02:06.630 CC lib/lvol/lvol.o 00:02:06.630 CC lib/blobfs/blobfs.o 00:02:06.630 CC lib/blobfs/tree.o 00:02:07.203 LIB libspdk_bdev.a 00:02:07.203 SO libspdk_bdev.so.16.0 00:02:07.203 SYMLINK libspdk_bdev.so 00:02:07.203 LIB libspdk_blobfs.a 00:02:07.462 LIB libspdk_lvol.a 00:02:07.462 SO libspdk_blobfs.so.10.0 00:02:07.462 SO libspdk_lvol.so.10.0 00:02:07.462 SYMLINK libspdk_blobfs.so 00:02:07.462 SYMLINK libspdk_lvol.so 00:02:07.462 CC lib/scsi/lun.o 00:02:07.721 CC lib/scsi/dev.o 00:02:07.721 CC lib/nvmf/ctrlr.o 00:02:07.721 CC lib/scsi/port.o 00:02:07.721 CC lib/scsi/scsi.o 00:02:07.721 CC lib/scsi/scsi_bdev.o 00:02:07.721 CC lib/scsi/scsi_pr.o 00:02:07.721 CC lib/nvmf/ctrlr_discovery.o 00:02:07.721 CC lib/scsi/scsi_rpc.o 00:02:07.721 CC lib/scsi/task.o 00:02:07.721 CC lib/nvmf/ctrlr_bdev.o 00:02:07.721 CC lib/nvmf/subsystem.o 00:02:07.721 CC lib/nvmf/nvmf_rpc.o 00:02:07.721 CC lib/nvmf/nvmf.o 00:02:07.721 CC lib/nvmf/transport.o 00:02:07.721 CC lib/ftl/ftl_core.o 00:02:07.721 CC lib/nbd/nbd.o 00:02:07.721 CC lib/nvmf/tcp.o 00:02:07.721 CC lib/ftl/ftl_init.o 00:02:07.721 CC lib/nbd/nbd_rpc.o 00:02:07.721 CC lib/ftl/ftl_layout.o 00:02:07.721 CC lib/nvmf/stubs.o 00:02:07.721 CC lib/nvmf/mdns_server.o 00:02:07.721 CC lib/ftl/ftl_debug.o 00:02:07.721 CC lib/nvmf/vfio_user.o 00:02:07.721 CC lib/ftl/ftl_io.o 00:02:07.721 CC lib/nvmf/rdma.o 00:02:07.721 CC lib/ftl/ftl_sb.o 00:02:07.721 CC lib/nvmf/auth.o 00:02:07.721 CC lib/ublk/ublk.o 00:02:07.721 CC lib/ftl/ftl_l2p.o 00:02:07.721 CC lib/ublk/ublk_rpc.o 00:02:07.721 CC lib/ftl/ftl_l2p_flat.o 00:02:07.721 CC lib/ftl/ftl_nv_cache.o 00:02:07.721 CC lib/ftl/ftl_band.o 00:02:07.721 CC lib/ftl/ftl_band_ops.o 00:02:07.721 CC lib/ftl/ftl_writer.o 00:02:07.721 CC lib/ftl/ftl_rq.o 00:02:07.721 CC lib/ftl/ftl_p2l.o 00:02:07.721 CC lib/ftl/ftl_l2p_cache.o 00:02:07.721 CC lib/ftl/ftl_reloc.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:07.721 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:07.721 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:07.721 CC lib/ftl/utils/ftl_md.o 00:02:07.721 CC lib/ftl/utils/ftl_conf.o 00:02:07.721 CC lib/ftl/utils/ftl_bitmap.o 00:02:07.721 CC lib/ftl/utils/ftl_property.o 00:02:07.721 CC lib/ftl/utils/ftl_mempool.o 00:02:07.721 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:07.721 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:07.721 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:07.721 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:07.721 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:07.721 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:07.721 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:07.721 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:07.721 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:07.721 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:07.721 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:07.721 CC lib/ftl/base/ftl_base_dev.o 00:02:07.721 CC lib/ftl/ftl_trace.o 00:02:07.721 CC lib/ftl/base/ftl_base_bdev.o 00:02:07.979 LIB libspdk_nbd.a 00:02:08.237 SO libspdk_nbd.so.7.0 00:02:08.237 LIB libspdk_scsi.a 00:02:08.237 SYMLINK libspdk_nbd.so 00:02:08.237 SO libspdk_scsi.so.9.0 00:02:08.237 LIB libspdk_ublk.a 00:02:08.237 SYMLINK libspdk_scsi.so 00:02:08.495 SO libspdk_ublk.so.3.0 00:02:08.495 SYMLINK libspdk_ublk.so 00:02:08.495 LIB libspdk_ftl.a 00:02:08.753 CC lib/vhost/vhost.o 00:02:08.753 CC lib/iscsi/conn.o 00:02:08.753 CC lib/vhost/vhost_blk.o 00:02:08.753 CC lib/vhost/vhost_rpc.o 00:02:08.753 CC lib/iscsi/init_grp.o 00:02:08.753 CC lib/vhost/vhost_scsi.o 00:02:08.753 CC lib/iscsi/iscsi.o 00:02:08.753 CC lib/vhost/rte_vhost_user.o 00:02:08.753 CC lib/iscsi/md5.o 00:02:08.753 CC lib/iscsi/param.o 00:02:08.753 CC lib/iscsi/portal_grp.o 00:02:08.753 CC lib/iscsi/tgt_node.o 00:02:08.753 CC lib/iscsi/task.o 00:02:08.753 CC lib/iscsi/iscsi_subsystem.o 00:02:08.753 CC lib/iscsi/iscsi_rpc.o 00:02:08.753 SO libspdk_ftl.so.9.0 00:02:09.011 SYMLINK libspdk_ftl.so 00:02:09.270 LIB libspdk_nvmf.a 00:02:09.270 SO libspdk_nvmf.so.19.0 00:02:09.528 LIB libspdk_vhost.a 00:02:09.528 SYMLINK libspdk_nvmf.so 00:02:09.528 SO libspdk_vhost.so.8.0 00:02:09.528 SYMLINK libspdk_vhost.so 00:02:09.787 LIB libspdk_iscsi.a 00:02:09.787 SO libspdk_iscsi.so.8.0 00:02:10.046 SYMLINK libspdk_iscsi.so 00:02:10.613 CC module/vfu_device/vfu_virtio.o 00:02:10.613 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.613 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.613 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.613 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.613 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:10.613 CC module/accel/error/accel_error.o 00:02:10.613 CC module/accel/error/accel_error_rpc.o 00:02:10.613 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:10.613 LIB libspdk_env_dpdk_rpc.a 00:02:10.613 CC module/keyring/linux/keyring_rpc.o 00:02:10.613 CC module/keyring/linux/keyring.o 00:02:10.613 CC module/keyring/file/keyring.o 00:02:10.613 CC module/keyring/file/keyring_rpc.o 00:02:10.613 CC module/accel/iaa/accel_iaa.o 00:02:10.613 CC module/accel/iaa/accel_iaa_rpc.o 00:02:10.613 CC module/sock/posix/posix.o 00:02:10.613 CC module/scheduler/gscheduler/gscheduler.o 00:02:10.613 CC module/blob/bdev/blob_bdev.o 00:02:10.613 CC module/accel/dsa/accel_dsa_rpc.o 00:02:10.613 CC module/accel/dsa/accel_dsa.o 00:02:10.613 CC module/accel/ioat/accel_ioat.o 00:02:10.613 CC module/accel/ioat/accel_ioat_rpc.o 00:02:10.613 SO libspdk_env_dpdk_rpc.so.6.0 00:02:10.613 SYMLINK libspdk_env_dpdk_rpc.so 00:02:10.872 LIB libspdk_keyring_file.a 00:02:10.872 LIB libspdk_accel_error.a 
00:02:10.872 LIB libspdk_scheduler_dpdk_governor.a 00:02:10.872 LIB libspdk_keyring_linux.a 00:02:10.872 LIB libspdk_scheduler_gscheduler.a 00:02:10.872 LIB libspdk_scheduler_dynamic.a 00:02:10.872 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:10.872 SO libspdk_keyring_file.so.1.0 00:02:10.872 SO libspdk_keyring_linux.so.1.0 00:02:10.872 LIB libspdk_accel_ioat.a 00:02:10.872 SO libspdk_accel_error.so.2.0 00:02:10.872 LIB libspdk_accel_iaa.a 00:02:10.872 SO libspdk_scheduler_dynamic.so.4.0 00:02:10.872 SO libspdk_scheduler_gscheduler.so.4.0 00:02:10.872 SO libspdk_accel_ioat.so.6.0 00:02:10.872 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:10.872 SO libspdk_accel_iaa.so.3.0 00:02:10.872 SYMLINK libspdk_keyring_file.so 00:02:10.872 SYMLINK libspdk_keyring_linux.so 00:02:10.872 LIB libspdk_accel_dsa.a 00:02:10.872 LIB libspdk_blob_bdev.a 00:02:10.872 SYMLINK libspdk_scheduler_dynamic.so 00:02:10.872 SYMLINK libspdk_accel_error.so 00:02:10.872 SYMLINK libspdk_scheduler_gscheduler.so 00:02:10.872 SYMLINK libspdk_accel_ioat.so 00:02:10.872 SO libspdk_blob_bdev.so.11.0 00:02:10.872 SO libspdk_accel_dsa.so.5.0 00:02:10.872 SYMLINK libspdk_accel_iaa.so 00:02:10.872 LIB libspdk_vfu_device.a 00:02:11.131 SYMLINK libspdk_accel_dsa.so 00:02:11.131 SYMLINK libspdk_blob_bdev.so 00:02:11.131 SO libspdk_vfu_device.so.3.0 00:02:11.131 SYMLINK libspdk_vfu_device.so 00:02:11.131 LIB libspdk_sock_posix.a 00:02:11.131 SO libspdk_sock_posix.so.6.0 00:02:11.390 SYMLINK libspdk_sock_posix.so 00:02:11.390 CC module/bdev/delay/vbdev_delay.o 00:02:11.390 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.649 CC module/bdev/gpt/gpt.o 00:02:11.649 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.649 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.649 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.649 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.649 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.649 CC module/bdev/raid/bdev_raid.o 00:02:11.649 CC module/bdev/split/vbdev_split.o 00:02:11.649 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.649 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.649 CC module/bdev/raid/raid0.o 00:02:11.649 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.649 CC module/bdev/error/vbdev_error.o 00:02:11.649 CC module/bdev/raid/concat.o 00:02:11.649 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.649 CC module/bdev/raid/raid1.o 00:02:11.649 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.649 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.649 CC module/bdev/ftl/bdev_ftl.o 00:02:11.649 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.649 CC module/bdev/aio/bdev_aio.o 00:02:11.649 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.649 CC module/bdev/nvme/bdev_nvme.o 00:02:11.649 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.649 CC module/bdev/nvme/nvme_rpc.o 00:02:11.649 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.649 CC module/bdev/malloc/bdev_malloc.o 00:02:11.649 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.649 CC module/bdev/nvme/vbdev_opal.o 00:02:11.649 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.649 CC module/bdev/null/bdev_null_rpc.o 00:02:11.649 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.649 CC module/bdev/null/bdev_null.o 00:02:11.649 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.649 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.649 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.649 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.649 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.649 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.649 CC module/bdev/virtio/bdev_virtio_rpc.o 
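[Note] The module/bdev/* objects above (malloc, raid, gpt, lvol, nvme, virtio, ...) are the block-device backends the later subjobs exercise; at run time they are instantiated over the same JSON-RPC channel. A sketch assuming a running target, with arbitrary sizes (a 64 MiB RAM-backed volume with 512-byte blocks):

  # Create a malloc bdev and read back its descriptor.
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0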
00:02:11.907 LIB libspdk_blobfs_bdev.a 00:02:11.907 SO libspdk_blobfs_bdev.so.6.0 00:02:11.907 LIB libspdk_bdev_error.a 00:02:11.907 LIB libspdk_bdev_split.a 00:02:11.907 LIB libspdk_bdev_gpt.a 00:02:11.907 SO libspdk_bdev_error.so.6.0 00:02:11.907 SYMLINK libspdk_blobfs_bdev.so 00:02:11.907 SO libspdk_bdev_split.so.6.0 00:02:11.907 LIB libspdk_bdev_ftl.a 00:02:11.907 LIB libspdk_bdev_null.a 00:02:11.907 SO libspdk_bdev_gpt.so.6.0 00:02:11.907 LIB libspdk_bdev_delay.a 00:02:11.907 LIB libspdk_bdev_zone_block.a 00:02:11.907 LIB libspdk_bdev_passthru.a 00:02:11.907 SO libspdk_bdev_ftl.so.6.0 00:02:11.907 SYMLINK libspdk_bdev_error.so 00:02:11.907 LIB libspdk_bdev_aio.a 00:02:11.907 SO libspdk_bdev_null.so.6.0 00:02:11.907 SO libspdk_bdev_zone_block.so.6.0 00:02:11.907 SO libspdk_bdev_delay.so.6.0 00:02:11.907 SYMLINK libspdk_bdev_split.so 00:02:11.907 LIB libspdk_bdev_malloc.a 00:02:11.907 LIB libspdk_bdev_iscsi.a 00:02:11.907 SO libspdk_bdev_passthru.so.6.0 00:02:11.907 SYMLINK libspdk_bdev_gpt.so 00:02:11.907 SO libspdk_bdev_malloc.so.6.0 00:02:11.907 SO libspdk_bdev_aio.so.6.0 00:02:11.907 SYMLINK libspdk_bdev_ftl.so 00:02:11.907 SYMLINK libspdk_bdev_null.so 00:02:11.907 SYMLINK libspdk_bdev_zone_block.so 00:02:11.907 SO libspdk_bdev_iscsi.so.6.0 00:02:11.907 SYMLINK libspdk_bdev_delay.so 00:02:11.907 LIB libspdk_bdev_lvol.a 00:02:11.907 SYMLINK libspdk_bdev_passthru.so 00:02:11.907 SYMLINK libspdk_bdev_aio.so 00:02:12.167 SYMLINK libspdk_bdev_malloc.so 00:02:12.167 LIB libspdk_bdev_virtio.a 00:02:12.167 SYMLINK libspdk_bdev_iscsi.so 00:02:12.167 SO libspdk_bdev_lvol.so.6.0 00:02:12.167 SO libspdk_bdev_virtio.so.6.0 00:02:12.167 SYMLINK libspdk_bdev_lvol.so 00:02:12.167 SYMLINK libspdk_bdev_virtio.so 00:02:12.426 LIB libspdk_bdev_raid.a 00:02:12.426 SO libspdk_bdev_raid.so.6.0 00:02:12.426 SYMLINK libspdk_bdev_raid.so 00:02:13.365 LIB libspdk_bdev_nvme.a 00:02:13.365 SO libspdk_bdev_nvme.so.7.0 00:02:13.365 SYMLINK libspdk_bdev_nvme.so 00:02:13.932 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:13.932 CC module/event/subsystems/vmd/vmd.o 00:02:13.932 CC module/event/subsystems/sock/sock.o 00:02:13.932 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.191 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:14.191 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.191 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.191 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.191 CC module/event/subsystems/keyring/keyring.o 00:02:14.191 LIB libspdk_event_sock.a 00:02:14.191 LIB libspdk_event_scheduler.a 00:02:14.191 LIB libspdk_event_vhost_blk.a 00:02:14.191 LIB libspdk_event_vmd.a 00:02:14.191 SO libspdk_event_sock.so.5.0 00:02:14.191 SO libspdk_event_scheduler.so.4.0 00:02:14.191 LIB libspdk_event_vfu_tgt.a 00:02:14.191 LIB libspdk_event_keyring.a 00:02:14.191 LIB libspdk_event_iobuf.a 00:02:14.191 SO libspdk_event_vmd.so.6.0 00:02:14.191 SO libspdk_event_vhost_blk.so.3.0 00:02:14.191 SO libspdk_event_keyring.so.1.0 00:02:14.191 SO libspdk_event_vfu_tgt.so.3.0 00:02:14.191 SO libspdk_event_iobuf.so.3.0 00:02:14.191 SYMLINK libspdk_event_sock.so 00:02:14.191 SYMLINK libspdk_event_scheduler.so 00:02:14.191 SYMLINK libspdk_event_vmd.so 00:02:14.191 SYMLINK libspdk_event_vhost_blk.so 00:02:14.451 SYMLINK libspdk_event_keyring.so 00:02:14.451 SYMLINK libspdk_event_vfu_tgt.so 00:02:14.451 SYMLINK libspdk_event_iobuf.so 00:02:14.710 CC module/event/subsystems/accel/accel.o 00:02:14.710 LIB libspdk_event_accel.a 00:02:14.969 SO libspdk_event_accel.so.6.0 00:02:14.969 SYMLINK 
libspdk_event_accel.so 00:02:15.228 CC module/event/subsystems/bdev/bdev.o 00:02:15.487 LIB libspdk_event_bdev.a 00:02:15.487 SO libspdk_event_bdev.so.6.0 00:02:15.487 SYMLINK libspdk_event_bdev.so 00:02:16.056 CC module/event/subsystems/ublk/ublk.o 00:02:16.056 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.056 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.056 CC module/event/subsystems/scsi/scsi.o 00:02:16.056 CC module/event/subsystems/nbd/nbd.o 00:02:16.056 LIB libspdk_event_ublk.a 00:02:16.056 SO libspdk_event_ublk.so.3.0 00:02:16.056 LIB libspdk_event_nbd.a 00:02:16.056 LIB libspdk_event_nvmf.a 00:02:16.056 LIB libspdk_event_scsi.a 00:02:16.056 SO libspdk_event_nbd.so.6.0 00:02:16.056 SO libspdk_event_scsi.so.6.0 00:02:16.056 SO libspdk_event_nvmf.so.6.0 00:02:16.056 SYMLINK libspdk_event_ublk.so 00:02:16.315 SYMLINK libspdk_event_nbd.so 00:02:16.315 SYMLINK libspdk_event_scsi.so 00:02:16.315 SYMLINK libspdk_event_nvmf.so 00:02:16.576 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.576 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.576 LIB libspdk_event_iscsi.a 00:02:16.835 LIB libspdk_event_vhost_scsi.a 00:02:16.835 SO libspdk_event_iscsi.so.6.0 00:02:16.835 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.835 SYMLINK libspdk_event_iscsi.so 00:02:16.835 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.094 SO libspdk.so.6.0 00:02:17.094 SYMLINK libspdk.so 00:02:17.368 TEST_HEADER include/spdk/accel.h 00:02:17.368 TEST_HEADER include/spdk/assert.h 00:02:17.368 TEST_HEADER include/spdk/accel_module.h 00:02:17.368 TEST_HEADER include/spdk/barrier.h 00:02:17.368 CC app/trace_record/trace_record.o 00:02:17.368 TEST_HEADER include/spdk/base64.h 00:02:17.368 CC test/rpc_client/rpc_client_test.o 00:02:17.368 TEST_HEADER include/spdk/bdev.h 00:02:17.368 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.368 TEST_HEADER include/spdk/bdev_module.h 00:02:17.368 TEST_HEADER include/spdk/bit_array.h 00:02:17.368 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.368 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.368 TEST_HEADER include/spdk/bit_pool.h 00:02:17.368 TEST_HEADER include/spdk/blobfs.h 00:02:17.368 TEST_HEADER include/spdk/conf.h 00:02:17.368 CC app/spdk_nvme_identify/identify.o 00:02:17.368 TEST_HEADER include/spdk/blob.h 00:02:17.368 CC app/spdk_lspci/spdk_lspci.o 00:02:17.368 CXX app/trace/trace.o 00:02:17.368 TEST_HEADER include/spdk/config.h 00:02:17.368 TEST_HEADER include/spdk/crc16.h 00:02:17.368 TEST_HEADER include/spdk/cpuset.h 00:02:17.368 TEST_HEADER include/spdk/crc32.h 00:02:17.368 TEST_HEADER include/spdk/crc64.h 00:02:17.368 CC app/spdk_top/spdk_top.o 00:02:17.368 TEST_HEADER include/spdk/dif.h 00:02:17.368 TEST_HEADER include/spdk/dma.h 00:02:17.368 TEST_HEADER include/spdk/endian.h 00:02:17.368 TEST_HEADER include/spdk/env.h 00:02:17.368 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.368 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.368 TEST_HEADER include/spdk/fd_group.h 00:02:17.368 TEST_HEADER include/spdk/event.h 00:02:17.368 TEST_HEADER include/spdk/fd.h 00:02:17.368 TEST_HEADER include/spdk/file.h 00:02:17.368 CC app/spdk_nvme_perf/perf.o 00:02:17.368 TEST_HEADER include/spdk/ftl.h 00:02:17.368 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.368 TEST_HEADER include/spdk/hexlify.h 00:02:17.368 TEST_HEADER include/spdk/histogram_data.h 00:02:17.368 TEST_HEADER include/spdk/idxd.h 00:02:17.368 TEST_HEADER include/spdk/init.h 00:02:17.368 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.368 TEST_HEADER include/spdk/ioat.h 00:02:17.368 TEST_HEADER 
include/spdk/ioat_spec.h 00:02:17.368 TEST_HEADER include/spdk/json.h 00:02:17.368 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.368 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.368 TEST_HEADER include/spdk/keyring.h 00:02:17.368 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.368 TEST_HEADER include/spdk/keyring_module.h 00:02:17.368 TEST_HEADER include/spdk/likely.h 00:02:17.368 TEST_HEADER include/spdk/log.h 00:02:17.368 TEST_HEADER include/spdk/lvol.h 00:02:17.368 TEST_HEADER include/spdk/memory.h 00:02:17.368 TEST_HEADER include/spdk/mmio.h 00:02:17.368 TEST_HEADER include/spdk/nbd.h 00:02:17.368 TEST_HEADER include/spdk/net.h 00:02:17.368 TEST_HEADER include/spdk/notify.h 00:02:17.368 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.368 TEST_HEADER include/spdk/nvme.h 00:02:17.368 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.368 CC app/spdk_dd/spdk_dd.o 00:02:17.368 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.368 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.368 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.368 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.368 TEST_HEADER include/spdk/nvmf.h 00:02:17.368 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.368 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.368 TEST_HEADER include/spdk/opal.h 00:02:17.368 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.368 TEST_HEADER include/spdk/opal_spec.h 00:02:17.368 TEST_HEADER include/spdk/pipe.h 00:02:17.368 CC app/nvmf_tgt/nvmf_main.o 00:02:17.368 TEST_HEADER include/spdk/pci_ids.h 00:02:17.368 TEST_HEADER include/spdk/queue.h 00:02:17.368 TEST_HEADER include/spdk/rpc.h 00:02:17.368 TEST_HEADER include/spdk/reduce.h 00:02:17.368 TEST_HEADER include/spdk/scheduler.h 00:02:17.368 TEST_HEADER include/spdk/scsi.h 00:02:17.368 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.368 TEST_HEADER include/spdk/sock.h 00:02:17.368 TEST_HEADER include/spdk/stdinc.h 00:02:17.368 TEST_HEADER include/spdk/string.h 00:02:17.368 CC app/spdk_tgt/spdk_tgt.o 00:02:17.368 TEST_HEADER include/spdk/thread.h 00:02:17.368 TEST_HEADER include/spdk/trace.h 00:02:17.368 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.368 TEST_HEADER include/spdk/trace_parser.h 00:02:17.368 TEST_HEADER include/spdk/tree.h 00:02:17.368 TEST_HEADER include/spdk/util.h 00:02:17.368 TEST_HEADER include/spdk/uuid.h 00:02:17.368 TEST_HEADER include/spdk/ublk.h 00:02:17.368 TEST_HEADER include/spdk/version.h 00:02:17.368 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.368 TEST_HEADER include/spdk/vhost.h 00:02:17.368 TEST_HEADER include/spdk/vmd.h 00:02:17.368 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.369 TEST_HEADER include/spdk/xor.h 00:02:17.369 TEST_HEADER include/spdk/zipf.h 00:02:17.369 CXX test/cpp_headers/accel_module.o 00:02:17.369 CXX test/cpp_headers/accel.o 00:02:17.369 CXX test/cpp_headers/assert.o 00:02:17.369 CXX test/cpp_headers/barrier.o 00:02:17.369 CXX test/cpp_headers/base64.o 00:02:17.369 CXX test/cpp_headers/bdev.o 00:02:17.369 CXX test/cpp_headers/bdev_zone.o 00:02:17.369 CXX test/cpp_headers/bit_array.o 00:02:17.369 CXX test/cpp_headers/bdev_module.o 00:02:17.369 CXX test/cpp_headers/bit_pool.o 00:02:17.676 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.676 CXX test/cpp_headers/blob_bdev.o 00:02:17.676 CXX test/cpp_headers/blobfs.o 00:02:17.676 CXX test/cpp_headers/blob.o 00:02:17.676 CXX test/cpp_headers/config.o 00:02:17.676 CXX test/cpp_headers/cpuset.o 00:02:17.676 CXX test/cpp_headers/conf.o 00:02:17.676 CXX test/cpp_headers/crc16.o 00:02:17.676 CXX test/cpp_headers/crc32.o 00:02:17.676 CXX test/cpp_headers/crc64.o 
00:02:17.676 CXX test/cpp_headers/dif.o 00:02:17.676 CXX test/cpp_headers/endian.o 00:02:17.676 CXX test/cpp_headers/dma.o 00:02:17.676 CXX test/cpp_headers/env_dpdk.o 00:02:17.676 CXX test/cpp_headers/env.o 00:02:17.676 CXX test/cpp_headers/event.o 00:02:17.676 CXX test/cpp_headers/fd_group.o 00:02:17.676 CXX test/cpp_headers/ftl.o 00:02:17.676 CXX test/cpp_headers/file.o 00:02:17.676 CXX test/cpp_headers/fd.o 00:02:17.676 CXX test/cpp_headers/gpt_spec.o 00:02:17.676 CXX test/cpp_headers/hexlify.o 00:02:17.676 CXX test/cpp_headers/histogram_data.o 00:02:17.676 CXX test/cpp_headers/idxd.o 00:02:17.676 CXX test/cpp_headers/idxd_spec.o 00:02:17.676 CXX test/cpp_headers/init.o 00:02:17.676 CXX test/cpp_headers/ioat.o 00:02:17.676 CXX test/cpp_headers/ioat_spec.o 00:02:17.676 CXX test/cpp_headers/iscsi_spec.o 00:02:17.676 CXX test/cpp_headers/jsonrpc.o 00:02:17.676 CXX test/cpp_headers/json.o 00:02:17.676 CXX test/cpp_headers/keyring.o 00:02:17.676 CXX test/cpp_headers/log.o 00:02:17.676 CXX test/cpp_headers/lvol.o 00:02:17.676 CXX test/cpp_headers/likely.o 00:02:17.676 CXX test/cpp_headers/keyring_module.o 00:02:17.676 CXX test/cpp_headers/memory.o 00:02:17.676 CXX test/cpp_headers/mmio.o 00:02:17.676 CC test/app/jsoncat/jsoncat.o 00:02:17.676 CXX test/cpp_headers/nbd.o 00:02:17.676 CXX test/cpp_headers/net.o 00:02:17.677 CXX test/cpp_headers/notify.o 00:02:17.677 CXX test/cpp_headers/nvme.o 00:02:17.677 CXX test/cpp_headers/nvme_intel.o 00:02:17.677 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.677 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.677 CXX test/cpp_headers/nvme_spec.o 00:02:17.677 CXX test/cpp_headers/nvme_zns.o 00:02:17.677 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.677 CXX test/cpp_headers/nvmf.o 00:02:17.677 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.677 CXX test/cpp_headers/nvmf_transport.o 00:02:17.677 CXX test/cpp_headers/nvmf_spec.o 00:02:17.677 CXX test/cpp_headers/opal.o 00:02:17.677 CXX test/cpp_headers/opal_spec.o 00:02:17.677 CC test/app/histogram_perf/histogram_perf.o 00:02:17.677 CXX test/cpp_headers/pci_ids.o 00:02:17.677 CXX test/cpp_headers/pipe.o 00:02:17.677 CC test/app/stub/stub.o 00:02:17.677 CXX test/cpp_headers/queue.o 00:02:17.677 CXX test/cpp_headers/reduce.o 00:02:17.677 CXX test/cpp_headers/rpc.o 00:02:17.677 CXX test/cpp_headers/scheduler.o 00:02:17.677 CXX test/cpp_headers/scsi.o 00:02:17.677 CXX test/cpp_headers/scsi_spec.o 00:02:17.677 CXX test/cpp_headers/sock.o 00:02:17.677 CXX test/cpp_headers/string.o 00:02:17.677 CXX test/cpp_headers/stdinc.o 00:02:17.677 CXX test/cpp_headers/trace.o 00:02:17.677 CXX test/cpp_headers/thread.o 00:02:17.677 CXX test/cpp_headers/tree.o 00:02:17.677 CXX test/cpp_headers/trace_parser.o 00:02:17.677 CXX test/cpp_headers/ublk.o 00:02:17.677 CXX test/cpp_headers/util.o 00:02:17.677 CC examples/ioat/perf/perf.o 00:02:17.677 CC test/thread/poller_perf/poller_perf.o 00:02:17.677 CC test/env/pci/pci_ut.o 00:02:17.677 CC examples/util/zipf/zipf.o 00:02:17.677 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.677 CC test/env/memory/memory_ut.o 00:02:17.677 CC examples/ioat/verify/verify.o 00:02:17.677 CC app/fio/nvme/fio_plugin.o 00:02:17.677 CC test/env/vtophys/vtophys.o 00:02:17.677 CC test/app/bdev_svc/bdev_svc.o 00:02:17.677 CXX test/cpp_headers/uuid.o 00:02:17.677 CC test/dma/test_dma/test_dma.o 00:02:17.677 CXX test/cpp_headers/version.o 00:02:17.677 CXX test/cpp_headers/vfio_user_pci.o 00:02:17.677 CC app/fio/bdev/fio_plugin.o 00:02:17.955 LINK spdk_lspci 00:02:17.955 CXX test/cpp_headers/vfio_user_spec.o 
00:02:17.955 LINK rpc_client_test 00:02:17.955 LINK spdk_nvme_discover 00:02:18.216 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.216 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.216 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.216 LINK interrupt_tgt 00:02:18.216 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.216 LINK histogram_perf 00:02:18.216 LINK nvmf_tgt 00:02:18.216 LINK iscsi_tgt 00:02:18.216 LINK zipf 00:02:18.216 CXX test/cpp_headers/vhost.o 00:02:18.216 LINK stub 00:02:18.216 CXX test/cpp_headers/vmd.o 00:02:18.216 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.216 CXX test/cpp_headers/xor.o 00:02:18.216 CXX test/cpp_headers/zipf.o 00:02:18.216 LINK env_dpdk_post_init 00:02:18.216 LINK vtophys 00:02:18.216 LINK jsoncat 00:02:18.475 LINK spdk_trace_record 00:02:18.475 LINK poller_perf 00:02:18.475 LINK spdk_tgt 00:02:18.475 LINK verify 00:02:18.475 LINK spdk_dd 00:02:18.475 LINK bdev_svc 00:02:18.475 LINK ioat_perf 00:02:18.475 LINK pci_ut 00:02:18.475 LINK spdk_trace 00:02:18.475 LINK test_dma 00:02:18.733 LINK spdk_nvme 00:02:18.733 LINK spdk_bdev 00:02:18.733 LINK nvme_fuzz 00:02:18.733 LINK spdk_nvme_identify 00:02:18.733 LINK vhost_fuzz 00:02:18.733 LINK spdk_nvme_perf 00:02:18.733 LINK spdk_top 00:02:18.991 LINK mem_callbacks 00:02:18.991 CC examples/sock/hello_world/hello_sock.o 00:02:18.991 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.991 CC examples/vmd/led/led.o 00:02:18.991 CC test/event/reactor/reactor.o 00:02:18.991 CC examples/idxd/perf/perf.o 00:02:18.991 CC app/vhost/vhost.o 00:02:18.991 CC test/event/reactor_perf/reactor_perf.o 00:02:18.991 CC examples/thread/thread/thread_ex.o 00:02:18.991 CC test/event/event_perf/event_perf.o 00:02:18.991 CC test/event/app_repeat/app_repeat.o 00:02:18.991 CC test/event/scheduler/scheduler.o 00:02:18.991 CC test/nvme/compliance/nvme_compliance.o 00:02:18.991 LINK lsvmd 00:02:18.991 CC test/nvme/fused_ordering/fused_ordering.o 00:02:18.991 CC test/nvme/cuse/cuse.o 00:02:18.991 CC test/nvme/sgl/sgl.o 00:02:18.991 CC test/nvme/reserve/reserve.o 00:02:18.991 CC test/nvme/connect_stress/connect_stress.o 00:02:18.991 CC test/nvme/aer/aer.o 00:02:18.991 CC test/nvme/e2edp/nvme_dp.o 00:02:18.991 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:18.991 CC test/nvme/simple_copy/simple_copy.o 00:02:18.991 CC test/nvme/boot_partition/boot_partition.o 00:02:18.991 CC test/nvme/overhead/overhead.o 00:02:18.991 CC test/nvme/err_injection/err_injection.o 00:02:18.991 LINK reactor 00:02:18.991 CC test/nvme/startup/startup.o 00:02:18.991 CC test/nvme/reset/reset.o 00:02:18.991 CC test/blobfs/mkfs/mkfs.o 00:02:19.249 CC test/accel/dif/dif.o 00:02:19.249 CC test/nvme/fdp/fdp.o 00:02:19.249 LINK led 00:02:19.249 LINK event_perf 00:02:19.249 LINK reactor_perf 00:02:19.249 LINK memory_ut 00:02:19.249 LINK hello_sock 00:02:19.249 LINK vhost 00:02:19.249 LINK app_repeat 00:02:19.249 CC test/lvol/esnap/esnap.o 00:02:19.249 LINK scheduler 00:02:19.249 LINK idxd_perf 00:02:19.249 LINK thread 00:02:19.249 LINK connect_stress 00:02:19.249 LINK doorbell_aers 00:02:19.249 LINK startup 00:02:19.249 LINK boot_partition 00:02:19.249 LINK err_injection 00:02:19.249 LINK fused_ordering 00:02:19.249 LINK mkfs 00:02:19.249 LINK reserve 00:02:19.249 LINK simple_copy 00:02:19.249 LINK nvme_dp 00:02:19.249 LINK aer 00:02:19.249 LINK nvme_compliance 00:02:19.249 LINK overhead 00:02:19.507 LINK reset 00:02:19.507 LINK sgl 00:02:19.507 LINK fdp 00:02:19.507 LINK dif 00:02:19.507 LINK iscsi_fuzz 00:02:19.765 CC examples/nvme/pmr_persistence/pmr_persistence.o 
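[Note] The long CXX test/cpp_headers/*.o run that finishes above is, by the look of the target names, a self-containedness check on SPDK's public headers: each spdk/*.h is compiled alone as C++, so a header missing its own includes or extern "C" guards fails here. A hand-rolled equivalent for a single header, run from the spdk checkout (the stub file name is illustrative; the real test generates one stub per header):

  # Compile a translation unit that includes exactly one public header.
  echo '#include <spdk/nvme.h>' > check.cpp
  c++ -Iinclude -c check.cpp -o /dev/null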
00:02:19.765 CC examples/nvme/reconnect/reconnect.o 00:02:19.765 CC examples/nvme/abort/abort.o 00:02:19.765 CC examples/nvme/hello_world/hello_world.o 00:02:19.765 CC examples/nvme/hotplug/hotplug.o 00:02:19.765 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.765 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.765 CC examples/nvme/arbitration/arbitration.o 00:02:19.765 CC examples/accel/perf/accel_perf.o 00:02:19.765 LINK pmr_persistence 00:02:19.765 CC examples/blob/hello_world/hello_blob.o 00:02:19.765 CC examples/blob/cli/blobcli.o 00:02:19.765 LINK cmb_copy 00:02:20.022 LINK hello_world 00:02:20.022 LINK hotplug 00:02:20.022 LINK reconnect 00:02:20.022 LINK abort 00:02:20.022 LINK arbitration 00:02:20.022 CC test/bdev/bdevio/bdevio.o 00:02:20.022 LINK nvme_manage 00:02:20.022 LINK hello_blob 00:02:20.022 LINK cuse 00:02:20.279 LINK accel_perf 00:02:20.279 LINK blobcli 00:02:20.610 LINK bdevio 00:02:20.610 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.868 CC examples/bdev/bdevperf/bdevperf.o 00:02:20.868 LINK hello_bdev 00:02:21.433 LINK bdevperf 00:02:22.000 CC examples/nvmf/nvmf/nvmf.o 00:02:22.257 LINK nvmf 00:02:22.516 LINK esnap 00:02:23.084 00:02:23.084 real 0m35.207s 00:02:23.084 user 5m4.177s 00:02:23.084 sys 3m0.095s 00:02:23.084 13:30:19 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:23.084 13:30:19 make -- common/autotest_common.sh@10 -- $ set +x 00:02:23.084 ************************************ 00:02:23.084 END TEST make 00:02:23.084 ************************************ 00:02:23.084 13:30:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:23.084 13:30:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:23.084 13:30:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:23.084 13:30:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.084 13:30:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:23.084 13:30:19 -- pm/common@44 -- $ pid=4142504 00:02:23.084 13:30:19 -- pm/common@50 -- $ kill -TERM 4142504 00:02:23.084 13:30:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.084 13:30:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:23.084 13:30:19 -- pm/common@44 -- $ pid=4142506 00:02:23.084 13:30:19 -- pm/common@50 -- $ kill -TERM 4142506 00:02:23.084 13:30:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.084 13:30:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:23.084 13:30:19 -- pm/common@44 -- $ pid=4142508 00:02:23.084 13:30:19 -- pm/common@50 -- $ kill -TERM 4142508 00:02:23.084 13:30:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.084 13:30:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:23.084 13:30:19 -- pm/common@44 -- $ pid=4142530 00:02:23.084 13:30:19 -- pm/common@50 -- $ sudo -E kill -TERM 4142530 00:02:23.084 13:30:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.084 13:30:19 -- nvmf/common.sh@7 -- # uname -s 00:02:23.084 13:30:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.084 13:30:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.084 13:30:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.084 13:30:19 -- nvmf/common.sh@11 -- 
# NVMF_THIRD_PORT=4422 00:02:23.084 13:30:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.084 13:30:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.084 13:30:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.084 13:30:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.084 13:30:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.084 13:30:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.084 13:30:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:23.084 13:30:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:23.084 13:30:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.084 13:30:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.084 13:30:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:23.084 13:30:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.084 13:30:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:23.084 13:30:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.084 13:30:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.084 13:30:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.084 13:30:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.084 13:30:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.085 13:30:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.085 13:30:19 -- paths/export.sh@5 -- # export PATH 00:02:23.085 13:30:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.085 13:30:19 -- nvmf/common.sh@47 -- # : 0 00:02:23.085 13:30:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:23.085 13:30:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:23.085 13:30:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.085 13:30:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.085 13:30:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.085 13:30:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:23.085 13:30:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:23.085 13:30:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:23.085 13:30:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.085 13:30:19 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.085 13:30:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 
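[Note] In the nvmf/common.sh fragment traced above, the initiator identity is generated at run time: nvme gen-hostnqn (from nvme-cli) emits a UUID-based NQN like the nqn.2014-08.org.nvmexpress:uuid:... value captured in the log, and the host ID reuses its UUID part. A standalone sketch; the parameter-expansion step is my reconstruction, not copied from common.sh:

  # Derive the host NQN and host ID the way the harness does.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # requires nvme-cli
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"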
00:02:23.085 13:30:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.085 13:30:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.085 13:30:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.085 13:30:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.085 13:30:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.085 13:30:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.085 13:30:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.085 13:30:19 -- spdk/autotest.sh@48 -- # udevadm_pid=25255 00:02:23.085 13:30:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.085 13:30:19 -- pm/common@17 -- # local monitor 00:02:23.085 13:30:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.085 13:30:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.085 13:30:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.085 13:30:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.085 13:30:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.085 13:30:19 -- pm/common@25 -- # sleep 1 00:02:23.085 13:30:19 -- pm/common@21 -- # date +%s 00:02:23.085 13:30:19 -- pm/common@21 -- # date +%s 00:02:23.085 13:30:19 -- pm/common@21 -- # date +%s 00:02:23.085 13:30:19 -- pm/common@21 -- # date +%s 00:02:23.085 13:30:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907019 00:02:23.085 13:30:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907019 00:02:23.085 13:30:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907019 00:02:23.085 13:30:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907019 00:02:23.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907019_collect-vmstat.pm.log 00:02:23.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907019_collect-cpu-load.pm.log 00:02:23.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907019_collect-cpu-temp.pm.log 00:02:23.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907019_collect-bmc-pm.bmc.pm.log 00:02:24.280 13:30:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:24.280 13:30:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:24.280 13:30:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:24.280 13:30:20 -- common/autotest_common.sh@10 -- # set +x 00:02:24.280 13:30:20 -- spdk/autotest.sh@59 -- # create_test_list 00:02:24.280 13:30:20 -- 
common/autotest_common.sh@748 -- # xtrace_disable 00:02:24.280 13:30:20 -- common/autotest_common.sh@10 -- # set +x 00:02:24.281 13:30:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:24.281 13:30:20 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.281 13:30:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.281 13:30:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:24.281 13:30:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.281 13:30:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:24.281 13:30:20 -- common/autotest_common.sh@1455 -- # uname 00:02:24.281 13:30:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:24.281 13:30:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:24.281 13:30:20 -- common/autotest_common.sh@1475 -- # uname 00:02:24.281 13:30:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:24.281 13:30:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:24.281 13:30:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:24.281 13:30:21 -- spdk/autotest.sh@72 -- # hash lcov 00:02:24.281 13:30:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:24.281 13:30:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:24.281 --rc lcov_branch_coverage=1 00:02:24.281 --rc lcov_function_coverage=1 00:02:24.281 --rc genhtml_branch_coverage=1 00:02:24.281 --rc genhtml_function_coverage=1 00:02:24.281 --rc genhtml_legend=1 00:02:24.281 --rc geninfo_all_blocks=1 00:02:24.281 ' 00:02:24.281 13:30:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:24.281 --rc lcov_branch_coverage=1 00:02:24.281 --rc lcov_function_coverage=1 00:02:24.281 --rc genhtml_branch_coverage=1 00:02:24.281 --rc genhtml_function_coverage=1 00:02:24.281 --rc genhtml_legend=1 00:02:24.281 --rc geninfo_all_blocks=1 00:02:24.281 ' 00:02:24.281 13:30:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:24.281 --rc lcov_branch_coverage=1 00:02:24.281 --rc lcov_function_coverage=1 00:02:24.281 --rc genhtml_branch_coverage=1 00:02:24.281 --rc genhtml_function_coverage=1 00:02:24.281 --rc genhtml_legend=1 00:02:24.281 --rc geninfo_all_blocks=1 00:02:24.281 --no-external' 00:02:24.281 13:30:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:24.281 --rc lcov_branch_coverage=1 00:02:24.281 --rc lcov_function_coverage=1 00:02:24.281 --rc genhtml_branch_coverage=1 00:02:24.281 --rc genhtml_function_coverage=1 00:02:24.281 --rc genhtml_legend=1 00:02:24.281 --rc geninfo_all_blocks=1 00:02:24.281 --no-external' 00:02:24.281 13:30:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:24.281 lcov: LCOV version 1.14 00:02:24.281 13:30:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:36.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:36.489 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:46.523 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:46.523 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:46.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:46.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:46.524 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:46.524 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 
00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:46.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:49.059 13:30:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:49.059 13:30:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:49.059 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:02:49.059 13:30:45 -- spdk/autotest.sh@91 -- # rm -f 00:02:49.059 13:30:45 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.349 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:52.349 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:52.349 13:30:49 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:52.349 13:30:49 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
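[Annotation] The long run of geninfo warnings above is expected: the baseline is captured with lcov's initial mode (-c -i) before any test has executed, and objects such as nvme_stubs.gcno and the test/cpp_headers compile checks contain no instrumented functions, so GCOV has nothing to report for them. A minimal sketch of the same capture-then-merge pattern, assuming a GCC build instrumented for coverage; SRC/OUT are illustrative paths and the LCOV_OPTS shown are a subset of the options exported above, not SPDK's exact invocation:

    # Capture an empty baseline before tests, then combine it with post-test data
    # so files that were never executed still appear at 0% in the report.
    SRC=/path/to/spdk          # hypothetical source tree
    OUT="$SRC/../output"       # hypothetical output dir
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
    # ... run the test suites that exercise the instrumented code ...
    lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"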
00:02:52.349 13:30:49 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:52.349 13:30:49 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:52.349 13:30:49 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:52.349 13:30:49 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:52.349 13:30:49 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:52.349 13:30:49 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:52.349 13:30:49 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:52.349 13:30:49 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:52.349 13:30:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:52.349 13:30:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:52.349 13:30:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:52.349 13:30:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:52.349 13:30:49 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:52.349 No valid GPT data, bailing 00:02:52.349 13:30:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:52.608 13:30:49 -- scripts/common.sh@391 -- # pt= 00:02:52.608 13:30:49 -- scripts/common.sh@392 -- # return 1 00:02:52.608 13:30:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:52.608 1+0 records in 00:02:52.608 1+0 records out 00:02:52.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00592955 s, 177 MB/s 00:02:52.608 13:30:49 -- spdk/autotest.sh@118 -- # sync 00:02:52.608 13:30:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:52.608 13:30:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:52.608 13:30:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:59.180 13:30:55 -- spdk/autotest.sh@124 -- # uname -s 00:02:59.180 13:30:55 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:59.180 13:30:55 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.180 13:30:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:59.180 13:30:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:59.180 13:30:55 -- common/autotest_common.sh@10 -- # set +x 00:02:59.180 ************************************ 00:02:59.180 START TEST setup.sh 00:02:59.180 ************************************ 00:02:59.180 13:30:55 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.180 * Looking for test storage... 
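[Annotation] The pre-cleanup step traced above skips zoned namespaces, then probes each remaining NVMe namespace for a partition table and, on "No valid GPT data, bailing", zeroes its first MiB so tests start from a blank device. A rough equivalent of that wipe logic, assuming non-zoned devices and that destroying their contents is safe; the simple /dev/nvme*n1 glob stands in for the extglob pattern used above and is an illustrative simplification:

    # Skip zoned namespaces, then zero the first 1 MiB of any namespace
    # that carries no recognizable partition table. Destructive; run as root.
    for dev in /dev/nvme*n1; do
        name=$(basename "$dev")
        # A namespace is zoned if its queue reports something other than "none".
        if [[ -e /sys/block/$name/queue/zoned ]] &&
           [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # blkid prints a PTTYPE value (e.g. "gpt") only when a table exists.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done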
00:02:59.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.180 13:30:55 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:59.180 13:30:55 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:59.180 13:30:55 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.180 13:30:55 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:59.180 13:30:55 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:59.180 13:30:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.180 ************************************ 00:02:59.180 START TEST acl 00:02:59.180 ************************************ 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.180 * Looking for test storage... 00:02:59.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.180 13:30:55 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.180 13:30:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:59.180 13:30:55 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:59.180 13:30:55 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:59.180 13:30:55 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:59.180 13:30:55 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:59.180 13:30:55 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:59.180 13:30:55 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.180 13:30:55 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.472 13:30:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:02.472 13:30:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:02.472 13:30:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.472 13:30:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:02.472 13:30:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.472 13:30:58 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:05.767 Hugepages 00:03:05.767 node hugesize free / total 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 00:03:05.767 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:05.767 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:05.768 13:31:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:05.768 13:31:02 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:05.768 13:31:02 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.768 13:31:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.768 ************************************ 00:03:05.768 START TEST denied 00:03:05.768 ************************************ 00:03:05.768 13:31:02 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:05.768 13:31:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:05.768 13:31:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output 
config 00:03:05.768 13:31:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:05.768 13:31:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.768 13:31:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.151 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:09.151 13:31:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.152 13:31:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.346 00:03:13.346 real 0m7.523s 00:03:13.346 user 0m2.176s 00:03:13.346 sys 0m4.592s 00:03:13.346 13:31:09 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:13.346 13:31:09 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:13.346 ************************************ 00:03:13.346 END TEST denied 00:03:13.346 ************************************ 00:03:13.346 13:31:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:13.346 13:31:09 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:13.346 13:31:09 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.346 13:31:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:13.346 ************************************ 00:03:13.346 START TEST allowed 00:03:13.346 ************************************ 00:03:13.346 13:31:09 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:13.346 13:31:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:13.346 13:31:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:13.346 13:31:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:13.346 13:31:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.346 13:31:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.618 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.618 13:31:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:18.618 13:31:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:18.618 13:31:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:18.618 13:31:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.618 13:31:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.152 00:03:21.152 real 0m8.053s 00:03:21.152 user 0m2.195s 00:03:21.152 sys 0m4.331s 00:03:21.152 13:31:17 
setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.152 13:31:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:21.152 ************************************ 00:03:21.152 END TEST allowed 00:03:21.152 ************************************ 00:03:21.152 00:03:21.152 real 0m22.351s 00:03:21.152 user 0m6.659s 00:03:21.152 sys 0m13.561s 00:03:21.152 13:31:17 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.152 13:31:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.152 ************************************ 00:03:21.152 END TEST acl 00:03:21.152 ************************************ 00:03:21.152 13:31:18 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:21.152 13:31:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.152 13:31:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.152 13:31:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:21.412 ************************************ 00:03:21.412 START TEST hugepages 00:03:21.412 ************************************ 00:03:21.412 13:31:18 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:21.412 * Looking for test storage... 00:03:21.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.412 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 40098416 kB' 'MemAvailable: 44017252 kB' 'Buffers: 2704 kB' 'Cached: 11924452 kB' 'SwapCached: 0 kB' 'Active: 8787120 kB' 'Inactive: 3676316 kB' 'Active(anon): 8397256 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 539648 kB' 'Mapped: 216464 kB' 'Shmem: 7860976 kB' 'KReclaimable: 500148 kB' 'Slab: 1141112 kB' 'SReclaimable: 500148 kB' 'SUnreclaim: 640964 kB' 'KernelStack: 22032 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 9820324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 
13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.413 13:31:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.413 13:31:18 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _
[... xtrace loop setup/common.sh@31-@32 repeats for each remaining /proc/meminfo key (PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp); none matches Hugepagesize, so every iteration continues ...]
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
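The scan traced above is setup/common.sh's get_meminfo: it walks /proc/meminfo (or a node's meminfo) field by field with IFS=': ' and returns the value of the first key that matches the request, here Hugepagesize = 2048 kB. A minimal sketch of the same parsing pattern; get_meminfo_value is an illustrative name, and the real helper reads from a mapfile'd copy rather than the file directly:

    # Sketch: look up one key in /proc/meminfo, as the trace above does.
    # get_meminfo_value is illustrative, not the script's exact helper.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"          # e.g. "2048" for Hugepagesize (value in kB)
                return 0
            fi
        done < /proc/meminfo
        return 1                     # key not present
    }

    get_meminfo_value Hugepagesize   # -> 2048 on this runner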
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:21.414 13:31:18 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:21.414 13:31:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:21.414 13:31:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:21.414 13:31:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:21.414 ************************************
00:03:21.414 START TEST default_setup
00:03:21.414 ************************************
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
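Before the test starts, clear_hp zeroes every per-node hugepage pool (the echo 0 records above); get_test_nr_hugepages then converts the requested pool size into a page count: 2097152 kB / 2048 kB per page = 1024 pages, assigned to user node 0. A hedged sketch of both steps, assuming only the stock kernel sysfs layout, not the script's exact code:

    # Sketch of the arithmetic and sysfs writes traced above (needs root).
    size_kb=2097152                    # requested pool, in kB
    default_hugepages=2048             # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size_kb / default_hugepages ))   # = 1024

    # clear_hp equivalent: zero every per-node pool for every page size
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes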
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.415 13:31:18 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:24.703 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:24.703 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:24.962 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:26.875 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
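scripts/setup.sh detaches the ioatdma channels and the NVMe controller at 0000:d8:00.0 from their kernel drivers and hands them to vfio-pci, which is what the arrow lines above record. A generic sketch of the sysfs rebind mechanism such a script relies on; driver_override and drivers_probe are the stock kernel interfaces, and this is not a copy of SPDK's setup.sh:

    # Sketch: move one PCI device from its kernel driver to vfio-pci.
    bdf=0000:d8:00.0                                  # the NVMe device in this run
    dev=/sys/bus/pci/devices/$bdf

    [ -e "$dev/driver" ] && echo "$bdf" > "$dev/driver/unbind"   # leave nvme
    echo vfio-pci > "$dev/driver_override"                       # pin the new driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                     # trigger re-probe
    echo "" > "$dev/driver_override"                             # clear the override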
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:26.875 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42268908 kB' 'MemAvailable: 46187860 kB' 'Buffers: 2704 kB' 'Cached: 11924572 kB' 'SwapCached: 0 kB' 'Active: 8801812 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411948 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553244 kB' 'Mapped: 216468 kB' 'Shmem: 7861096 kB' 'KReclaimable: 500264 kB' 'Slab: 1138508 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638244 kB' 'KernelStack: 22400 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9837796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[... xtrace loop setup/common.sh@31-@32 repeats for each /proc/meminfo key from MemTotal through HardwareCorrupted; none matches AnonHugePages, so every iteration continues ...]
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:26.877 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42272100 kB' 'MemAvailable: 46191052 kB' 'Buffers: 2704 kB' 'Cached: 11924576 kB' 'SwapCached: 0 kB' 'Active: 8801124 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411260 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553460 kB' 'Mapped: 216524 kB' 'Shmem: 7861100 kB' 'KReclaimable: 500264 kB' 'Slab: 1138416 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638152 kB' 'KernelStack: 22304 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9837816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[... xtrace loop setup/common.sh@31-@32 repeats for each /proc/meminfo key from MemTotal through HugePages_Rsvd; none matches HugePages_Surp, so every iteration continues ...]
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
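verify_nr_hugepages calls get_meminfo once per counter, so the full /proc/meminfo walk above repeats for AnonHugePages, then HugePages_Surp, and next for HugePages_Rsvd. Where the per-key shell loop is not required, a single awk pass can collect all three counters at once; an equivalent sketch, not the script's actual method:

    # Sketch: fetch the three counters the verifier wants in one pass.
    read -r anon surp rsvd < <(awk '
        /^AnonHugePages:/  { a = $2 }
        /^HugePages_Surp:/ { s = $2 }
        /^HugePages_Rsvd:/ { r = $2 }
        END { print a, s, r }' /proc/meminfo)
    echo "anon=$anon surp=$surp rsvd=$rsvd"   # all 0 in the trace above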
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:26.879 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42270076 kB' 'MemAvailable: 46189028 kB' 'Buffers: 2704 kB' 'Cached: 11924592 kB' 'SwapCached: 0 kB' 'Active: 8801352 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411488 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553756 kB' 'Mapped: 216524 kB' 'Shmem: 7861116 kB' 'KReclaimable: 500264 kB' 'Slab: 1138416 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638152 kB' 'KernelStack: 22320 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9837968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[... xtrace loop setup/common.sh@31-@32 repeats for each /proc/meminfo key from MemTotal through PageTables; none matches HugePages_Rsvd, so every iteration continues ...]
00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.880 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.881 nr_hugepages=1024 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.881 resv_hugepages=0 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.881 surplus_hugepages=0 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.881 anon_hugepages=0 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
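The records above are setup/common.sh's get_meminfo at work: it snapshots the whole meminfo file once with mapfile, strips any leading 'Node <n> ' prefix, then splits each 'key: value' record on IFS=': ' and compares keys until the requested field (HugePages_Rsvd here, HugePages_Total next) matches and its value is echoed. A minimal standalone sketch of the same parsing technique, assuming bash with extglob; get_field is a hypothetical name used for illustration, not the SPDK helper itself:

  #!/usr/bin/env bash
  shopt -s extglob
  # get_field <key> [node]: print <key>'s value from /proc/meminfo, or from
  # /sys/devices/system/node/node<node>/meminfo when a node id is given.
  get_field() {
      local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # per-node rows carry a "Node <n> " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_field HugePages_Rsvd # prints 0 on this box, matching the trace above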
00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.881 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42271004 kB' 'MemAvailable: 46189956 kB' 'Buffers: 2704 kB' 'Cached: 11924616 kB' 'SwapCached: 0 kB' 'Active: 8801152 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411288 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553416 kB' 'Mapped: 216524 kB' 'Shmem: 7861140 kB' 'KReclaimable: 500264 kB' 'Slab: 1138416 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638152 kB' 'KernelStack: 22256 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9837620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:26.881
[repetitive xtrace elided: the same field-by-field walk as above, this time against HugePages_Total, skipping every other key via continue until the match below]
13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
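get_nodes above is the per-node half of the bookkeeping: it globs /sys/devices/system/node/node+([0-9]), derives each node id with ${node##*node}, and records the kernel's current per-node hugepage counts (1024 on node0, 0 on node1, no_nodes=2) so verify_nr_hugepages can compare them against what the test requested. A sketch of the same enumeration, reusing the hypothetical get_field from the sketch above:

  shopt -s extglob nullglob
  declare -a nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything through the last "node", leaving the id
      nodes_sys[${node##*node}]=$(get_field HugePages_Total "${node##*node}")
  done
  echo "no_nodes=${#nodes_sys[@]}" # 2 on this rig, as in the trace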
00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26917092 kB' 'MemUsed: 5674992 kB' 'SwapCached: 0 kB' 'Active: 2283320 kB' 'Inactive: 274308 kB' 'Active(anon): 2123468 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406016 kB' 'Mapped: 105508 kB' 'AnonPages: 154876 kB' 'Shmem: 1971856 kB' 'KernelStack: 12328 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159208 kB' 'Slab: 427100 kB' 'SReclaimable: 159208 kB' 'SUnreclaim: 267892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.883 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.883
[repetitive xtrace elided: the node0 snapshot above is walked key by key against HugePages_Surp, each mismatch hitting continue until the match below]
13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.884 node0=1024 expecting 1024 13:31:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.884 00:03:26.884 real 0m5.207s 00:03:26.884 user 0m1.356s 00:03:26.884 sys 0m2.360s 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.884 13:31:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:26.884 ************************************ 00:03:26.884 END TEST default_setup 00:03:26.884 ************************************ 00:03:26.884 13:31:23 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:26.884 13:31:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.884 13:31:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.884 13:31:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
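The per_node_1G_alloc test that starts below asks for 1048576 kB (1 GiB) of hugepages on each of nodes 0 and 1. With the 2048 kB default hugepage size reported in the snapshots above, that request works out to 1048576 / 2048 = 512 pages per node, which is why the trace sets nr_hugepages=512 and exports NRHUGE=512 HUGENODE=0,1 before re-running scripts/setup.sh. A one-line check of the same arithmetic:

  echo $(( 1048576 / 2048 )) # kB requested per node / kB per hugepage = 512 pages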
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.884 13:31:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.207 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.207 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.207 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.207 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.207 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.207 0000:00:04.2 (8086 2021): Already
using the vfio-pci driver 00:03:30.207 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.207 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42248324 kB' 'MemAvailable: 46167276 kB' 'Buffers: 2704 kB' 'Cached: 11924716 kB' 'SwapCached: 0 kB' 'Active: 8800604 kB' 'Inactive: 3676316 kB' 'Active(anon): 8410740 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552672 kB' 'Mapped: 215424 kB' 'Shmem: 7861240 kB' 'KReclaimable: 500264 kB' 'Slab: 1138244 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 637980 kB' 'KernelStack: 22336 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9825324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.207 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
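The scan continues below; each IFS=': ' / read / [[ ... ]] / continue quadruplet above and below is one iteration of the get_meminfo helper in setup/common.sh walking a meminfo file one field at a time. A minimal sketch of that loop, simplified from what the trace shows (the real helper also strips 'Node <n> ' prefixes when it reads a per-node meminfo file from sysfs):

    # Sketch of the scan driving the repeated records in this trace;
    # not a verbatim copy of setup/common.sh.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node reads come from sysfs when a node id is given
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # non-matching field: skip
            echo "$val"                         # e.g. 0 for HugePages_Surp
            return 0
        done <"$mem_f"
        return 1                                # field not present
    }

On this machine, get_meminfo HugePages_Surp prints 0, which is the 'echo 0' / 'return 0' pair that terminates each of these scans in the trace.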
00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.208 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42247416 kB' 'MemAvailable: 46166368 kB' 'Buffers: 2704 kB' 'Cached: 11924720 kB' 'SwapCached: 0 kB' 'Active: 8799364 kB' 'Inactive: 3676316 kB' 'Active(anon): 8409500 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551444 kB' 'Mapped: 215408 kB' 'Shmem: 7861244 kB' 'KReclaimable: 500264 kB' 'Slab: 1138280 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638016 kB' 'KernelStack: 22272 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9826960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.209 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 
13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 
13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.210 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42248252 kB' 'MemAvailable: 46167204 kB' 'Buffers: 2704 kB' 'Cached: 11924736 kB' 'SwapCached: 0 kB' 'Active: 8799056 kB' 'Inactive: 3676316 kB' 'Active(anon): 8409192 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551156 kB' 'Mapped: 215380 kB' 'Shmem: 7861260 kB' 'KReclaimable: 500264 kB' 'Slab: 1138316 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638052 kB' 'KernelStack: 22112 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9824388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.211 
00:03:30.211 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: the IFS=': ' read loop skips every /proc/meminfo field from MemFree through HugePages_Free before reaching the field it wants]
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
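The scan above is the whole trick behind setup/common.sh's get_meminfo: slurp the meminfo file into an array, strip any per-node prefix, then walk it line by line with IFS=': ' until the requested field matches and echo its value column. A minimal standalone sketch reconstructed from this xtrace (an approximation of the idiom, not the verbatim SPDK source):

#!/usr/bin/env bash
shopt -s extglob

# Pick one field out of /proc/meminfo (or a per-node meminfo file under
# /sys) by splitting each line on ': ' and printing the value column.
get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local var val _
	# Per-node statistics live in /sys/devices/system/node/nodeN/meminfo.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # strip "Node N "; no-op for /proc/meminfo
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Total  # 1024 in this run
get_meminfo HugePages_Surp 0 # 0 on node 0 in this run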
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.213 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42248632 kB' 'MemAvailable: 46167584 kB' 'Buffers: 2704 kB' 'Cached: 11924776 kB' 'SwapCached: 0 kB' 'Active: 8799652 kB' 'Inactive: 3676316 kB' 'Active(anon): 8409788 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551752 kB' 'Mapped: 215380 kB' 'Shmem: 7861300 kB' 'KReclaimable: 500264 kB' 'Slab: 1138316 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638052 kB' 'KernelStack: 22160 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9824912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
00:03:30.214 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: every field from MemTotal through HugePages_Free is read and skipped until HugePages_Total matches]
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
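The @107 and @110 checks assert the bookkeeping this test cares about: the pool it observes equals the requested nr_hugepages plus surplus and reserved pages. Spelled out with the get_meminfo sketch above (the helper name is from that hypothetical sketch; the zero values and the 1024 target are this run's):

# System-wide hugepage accounting as hugepages.sh asserts it in this run.
nr_hugepages=1024                    # requested pool size
total=$(get_meminfo HugePages_Total) # 1024
resv=$(get_meminfo HugePages_Rsvd)   # 0
surp=$(get_meminfo HugePages_Surp)   # 0
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"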
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.215 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27939480 kB' 'MemUsed: 4652604 kB' 'SwapCached: 0 kB' 'Active: 2281616 kB' 'Inactive: 274308 kB' 'Active(anon): 2121764 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406072 kB' 'Mapped: 104644 kB' 'AnonPages: 152968 kB' 'Shmem: 1971912 kB' 'KernelStack: 12168 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159208 kB' 'Slab: 427160 kB' 'SReclaimable: 159208 kB' 'SUnreclaim: 267952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:30.216 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: every node0 field from MemTotal through HugePages_Free is read and skipped until HugePages_Surp matches]
00:03:30.217 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.217 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.217 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.217 13:31:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
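With node 0 settled, the shape of the test is clear: 1024 pages split evenly across no_nodes=2 should leave 512 per node, matching the nodes_sys values above. A hypothetical sketch of that even-split check, reusing the earlier get_meminfo sketch (the 1024 target is this run's value):

shopt -s extglob # enables the +([0-9]) glob, as in setup/hugepages.sh@29
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# ${node##*node} reduces /sys/devices/system/node/node0 to the index 0.
	nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}
for n in "${!nodes_sys[@]}"; do
	(( nodes_sys[n] == 1024 / no_nodes )) \
		|| echo "node$n holds ${nodes_sys[n]}, expected $((1024 / no_nodes))"
done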
kB' 'MemUsed: 13393320 kB' 'SwapCached: 0 kB' 'Active: 6518020 kB' 'Inactive: 3402008 kB' 'Active(anon): 6288008 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3402008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9521448 kB' 'Mapped: 110736 kB' 'AnonPages: 398772 kB' 'Shmem: 5889428 kB' 'KernelStack: 9976 kB' 'PageTables: 4940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341056 kB' 'Slab: 711156 kB' 'SReclaimable: 341056 kB' 'SUnreclaim: 370100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.217 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 
13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.218 node0=512 expecting 512 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:30.218 node1=512 expecting 512 00:03:30.218 13:31:27 
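The repeated IFS/read/compare/continue trace above is get_meminfo walking one meminfo line at a time until the requested field matches. Stripped of the xtrace noise, the lookup reduces to the following bash sketch (get_meminfo_sketch is an illustrative name; the suite's own get_meminfo lives in setup/common.sh):

    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # Per-node counters come from sysfs when a NUMA node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}     # sysfs lines carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. 512 for HugePages_Total on node 1
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp 1, it prints 0 against the node1 dump above, which is exactly the value the trace returns.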
00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:30.218 
00:03:30.218 real	0m3.466s
00:03:30.218 user	0m1.321s
00:03:30.218 sys	0m2.213s
00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:30.218 13:31:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:30.218 ************************************
00:03:30.218 END TEST per_node_1G_alloc
00:03:30.218 ************************************
00:03:30.218 13:31:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:30.218 13:31:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:30.218 13:31:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:30.218 13:31:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:30.478 ************************************
00:03:30.478 START TEST even_2G_alloc
00:03:30.478 ************************************
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
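The constants in this trace line up: get_test_nr_hugepages converts the requested size into a page count using the 2048 kB Hugepagesize reported in the meminfo dumps, and get_test_nr_hugepages_per_node then splits that count evenly over the two NUMA nodes. As a quick back-of-envelope check (all values copied from the trace):

    size_kb=2097152                           # requested size in kB (2 GiB)
    hugepage_kb=2048                          # Hugepagesize from the dumps
    nr_hugepages=$((size_kb / hugepage_kb))   # -> 1024
    _no_nodes=2                               # NUMA nodes on this test rig
    echo "per node: $((nr_hugepages / _no_nodes))"   # -> per node: 512

which matches the two nodes_test[...]=512 assignments above.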
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.478 13:31:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:33.773 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.773 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
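With HUGE_EVEN_ALLOC=yes and NRHUGE=1024, scripts/setup.sh spreads the reservation evenly across the NUMA nodes before reporting the per-device driver status above. Reduced to the kernel's sysfs knobs, an even reservation looks roughly like this (a sketch only, assuming 2048 kB pages and root privileges; the real logic lives in spdk/scripts/setup.sh):

    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$((NRHUGE / ${#nodes[@]}))
    for node in "${nodes[@]}"; do
        # 2048 kB is the default hugepage size on this rig (see Hugepagesize).
        echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep -E '^HugePages_(Total|Free)' /proc/meminfo

verify_nr_hugepages below then re-reads the counters to confirm the allocation landed as expected.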
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.773 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.774 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42262276 kB' 'MemAvailable: 46181228 kB' 'Buffers: 2704 kB' 'Cached: 11924884 kB' 'SwapCached: 0 kB' 'Active: 8800780 kB' 'Inactive: 3676316 kB' 'Active(anon): 8410916 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553116 kB' 'Mapped: 215432 kB' 'Shmem: 7861408 kB' 'KReclaimable: 500264 kB' 'Slab: 1138844 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638580 kB' 'KernelStack: 22160 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9825416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (system meminfo scan over MemTotal through HardwareCorrupted; none match AnonHugePages)
00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
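anon is only read from AnonHugePages because transparent hugepages are not pinned to [never] on this host: the gate traced at setup/hugepages.sh@96 compares the current THP setting, "always [madvise] never", against *[never]*. That check can be reproduced directly against the standard kernel path (a sketch, reusing the illustrative helper from earlier):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous hugepages; count them as well.
        anon=$(get_meminfo_sketch AnonHugePages)
    else
        anon=0
    fi
    echo "anon=${anon:-0}"   # 0 in this run, per the dump above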
setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.775 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.776 13:31:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.776 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the same setup/common.sh@31 read and @32 compare-and-continue records repeat for every remaining /proc/meminfo key, Writeback through HugePages_Rsvd]
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
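For readers skimming the trace: the get_meminfo helper stepped through above opens one meminfo file, walks it key by key with IFS=': ' and read, and prints the value of the first key that matches the request. A minimal re-sketch in the same shell, assuming the behaviour shown by the setup/common.sh trace; the mapfile plus extglob strip of the per-node "Node N " prefix seen in the trace is replaced here by a sed call for brevity:

    get_meminfo() { # usage: get_meminfo <Key> [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, read that node's sysfs meminfo instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys until the match
            echo "$val"                        # the kB unit, if any, lands in _
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
    }
    # e.g.: surp=$(get_meminfo HugePages_Surp); resv=$(get_meminfo HugePages_Rsvd)

Each compare/continue pair in the log above and below is one iteration of that loop.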
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42263640 kB' 'MemAvailable: 46182592 kB' 'Buffers: 2704 kB' 'Cached: 11924904 kB' 'SwapCached: 0 kB' 'Active: 8800476 kB' 'Inactive: 3676316 kB' 'Active(anon): 8410612 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552860 kB' 'Mapped: 215392 kB' 'Shmem: 7861428 kB' 'KReclaimable: 500264 kB' 'Slab: 1138836 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638572 kB' 'KernelStack: 22144 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9825456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.777 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the @31 read and @32 compare-and-continue records repeat for every key, MemFree through HugePages_Free]
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc nr_hugepages=1024
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc resv_hugepages=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc surplus_hugepages=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc anon_hugepages=0
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
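The hugepages.sh@107 and @109 checks above assert that, with zero reserved and zero surplus pages, the kernel reports exactly the 1024 pages this even_2G_alloc pass requested; at the 2048 kB Hugepagesize from the dump, that is the 2 GiB also visible as 'Hugetlb: 2097152 kB'. A back-of-the-envelope restatement in the same shell (values copied from the dump above; the variable names are illustrative only):

    pages=1024 resv=0 surp=0 hugepagesize_kb=2048    # from the /proc/meminfo dump
    (( pages == 1024 + surp + resv )) && echo 'hugepage accounting consistent'
    echo "total: $(( pages * hugepagesize_kb )) kB"  # -> total: 2097152 kB = 2 GiB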
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.779 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42263640 kB' 'MemAvailable: 46182592 kB' 'Buffers: 2704 kB' 'Cached: 11924928 kB' 'SwapCached: 0 kB' 'Active: 8800120 kB' 'Inactive: 3676316 kB' 'Active(anon): 8410256 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552452 kB' 'Mapped: 215392 kB' 'Shmem: 7861452 kB' 'KReclaimable: 500264 kB' 'Slab: 1138836 kB' 'SReclaimable: 500264 kB' 'SUnreclaim: 638572 kB' 'KernelStack: 22128 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9825480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: the @31 read and @32 compare-and-continue records repeat for every key, MemTotal through Unaccepted]
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
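get_nodes above records an expected 512 pages for each of the two NUMA nodes (nodes_sys[0]=nodes_sys[1]=512, no_nodes=2), and the trace then re-reads each node's sysfs meminfo to verify the split, starting with HugePages_Surp on node 0 below. A compact sketch of that per-node verification, assuming the get_meminfo helper sketched earlier; the trace walks node+([0-9]) with extglob, simplified here to a plain glob, and the counter checked is shown with HugePages_Total for illustration:

    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=512          # even split: 512 pages per node
    done
    echo "no_nodes=${#nodes_sys[@]}"           # -> no_nodes=2 on this rig
    for n in "${!nodes_sys[@]}"; do
        got=$(get_meminfo HugePages_Total "$n")    # reads node$n's sysfs meminfo
        (( got == nodes_sys[n] )) || echo "node$n: want ${nodes_sys[n]}, got $got"
    done

The node0 dump that follows shows 'HugePages_Total: 512', i.e. half of the 1024 pages landed on node 0 as intended.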
+([0-9]) }") 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27956724 kB' 'MemUsed: 4635360 kB' 'SwapCached: 0 kB' 'Active: 2280920 kB' 'Inactive: 274308 kB' 'Active(anon): 2121068 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406076 kB' 'Mapped: 104644 kB' 'AnonPages: 152436 kB' 'Shmem: 1971916 kB' 'KernelStack: 12168 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159208 kB' 'Slab: 427880 kB' 'SReclaimable: 159208 kB' 'SUnreclaim: 268672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.781 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
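For readability, here is the pattern the trace above keeps repeating: setup/common.sh's get_meminfo picks /proc/meminfo or a node's sysfs meminfo, strips the 'Node <n> ' prefix, and scans 'key: value' pairs until the requested field matches. A minimal sketch reconstructed from the xtrace, an approximation rather than the verbatim SPDK source:

# Approximation of get_meminfo as it appears in the xtrace above:
# get_meminfo <field> [node] prints the value of <field> from
# /proc/meminfo, or from the node's meminfo when a node is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem

    # Per-node counters live in sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <n> " prefix; strip it (extglob pattern).
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk "key: value" pairs until the requested field matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # prints node 1's surplus count (the 0 echoed above)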
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.782 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.783 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14304656 kB' 'MemUsed: 13398452 kB' 'SwapCached: 0 kB' 'Active: 6520076 kB' 'Inactive: 3402008 kB' 'Active(anon): 6290064 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3402008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9521596 kB' 'Mapped: 110748 kB' 'AnonPages: 400884 kB' 'Shmem: 5889576 kB' 'KernelStack: 9976 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341056 kB' 'Slab: 710956 kB' 'SReclaimable: 341056 kB' 'SUnreclaim: 369900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 again walks every field above with `continue` until HugePages_Surp]
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:33.784 node0=512 expecting 512
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:33.784 node1=512 expecting 512
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:33.784
00:03:33.784 real 0m3.159s
00:03:33.784 user 0m1.069s
00:03:33.784 sys 0m2.018s
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:33.784 13:31:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:33.784 ************************************
00:03:33.784 END TEST even_2G_alloc
00:03:33.784 ************************************
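The node0=512/node1=512 lines above come from a compact uniqueness check at setup/hugepages.sh@126-128: each observed per-node count is used as an array index, so duplicates collapse and a single surviving key means every node got the same allocation. A sketch with illustrative values (sorted_s applies the same trick to the counts read back from sysfs via a nodes_sys array not shown in this excerpt):

# Sketch of the check at setup/hugepages.sh@126-128: indexing an array
# by the observed counts collapses duplicates, so exactly one surviving
# key means every node received the same number of hugepages.
declare -a nodes_test=([0]=512 [1]=512)   # illustrative values
declare -a sorted_t=()

for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done

echo "distinct counts: ${!sorted_t[*]}"   # one entry (512) = even split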
00:03:33.784 13:31:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:33.784 13:31:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:33.784 13:31:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:33.784 13:31:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:33.784 ************************************
00:03:33.784 START TEST odd_alloc
00:03:33.784 ************************************
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
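What odd_alloc just did, sketched: HUGEMEM=2049 MB is 2098176 kB, not an even multiple of the 2048 kB hugepage size, so get_test_nr_hugepages arrives at 1025 pages; the per-node helper then hands out shares from the highest node down, giving node1=512 and node0=513. A hedged reconstruction of the @81-84 loop (the ': 513' and ': 1' lines above are its traced intermediate remainders):

# Reconstruction of the per-node split traced at setup/hugepages.sh@81-84:
# give each node its share of the remaining pages, last node first.
_nr_hugepages=1025   # from the trace: get_test_nr_hugepages 2098176 (kB)
_no_nodes=2
nodes_test=()

while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ': 513', ': 0'
    : $(( _no_nodes -= 1 ))                               # traced as ': 1', ': 0'
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512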
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.784 13:31:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:36.321 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:36.582 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.582 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.583 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.583 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.583 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42261420 kB' 'MemAvailable: 46180356 kB' 'Buffers: 2704 kB' 'Cached: 11925040 kB' 'SwapCached: 0 kB' 'Active: 8802336 kB' 'Inactive: 3676316 kB' 'Active(anon): 8412472 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553712 kB' 'Mapped: 215536 kB' 'Shmem: 7861564 kB' 'KReclaimable: 500248 kB' 'Slab: 1138856 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638608 kB' 'KernelStack: 22160 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9826092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: per-field compare loop, MemTotal through HardwareCorrupted, each skipped with `continue`]
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
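The anon=0 above is gated on transparent hugepages: hugepages.sh@96 pattern-matches the bracketed active mode reported by the kernel ('always [madvise] never' here, i.e. madvise), and only when the mode is not [never] does it read AnonHugePages into the accounting. A minimal sketch of that gate (the awk one-liner stands in for the get_meminfo call):

# Sketch of the THP gate at setup/hugepages.sh@96-97. The kernel marks
# the active mode with brackets, e.g. "always [madvise] never".
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)

anon=0
if [[ $thp_mode != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages on its own; count them too.
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
fi
echo "anon=$anon"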
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.584 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42263720 kB' 'MemAvailable: 46182656 kB' 'Buffers: 2704 kB' 'Cached: 11925056 kB' 'SwapCached: 0 kB' 'Active: 8801212 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411348 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553020 kB' 'Mapped: 215404 kB' 'Shmem: 7861580 kB' 'KReclaimable: 500248 kB' 'Slab: 1138880 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638632 kB' 'KernelStack: 22128 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9826112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: per-field compare loop, MemTotal through AnonPages, each skipped with `continue`]
00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.585 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.586 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42263492 kB' 'MemAvailable: 46182428 kB' 'Buffers: 2704 kB' 'Cached: 11925060 kB' 'SwapCached: 0 kB' 'Active: 8801576 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411712 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553392 kB' 'Mapped: 215404 kB' 'Shmem: 7861584 kB' 'KReclaimable: 500248 kB' 'Slab: 1138880 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638632 kB' 'KernelStack: 22144 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9826132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 
'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
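
The scan running above and below is setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches the one requested (HugePages_Surp first, then HugePages_Rsvd here). A minimal sketch of that loop, reconstructed from the xtrace lines rather than copied from the script, so treat the exact wording as approximate:

    shopt -s extglob                       # needed for the +([0-9]) patterns below
    get_meminfo() {                        # get_meminfo <field> [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # with a node argument, read that node's own meminfo file instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the [[ key == \H\u\g\e... ]] lines in the trace
            echo "$val"
            return 0
        done
        return 1
    }

Calling get_meminfo HugePages_Rsvd on this box then yields 0, matching the echo 0 / return 0 that closes each scan in the trace.
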
00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.848 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
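
Once this second scan returns, hugepages.sh has both counters it needs and checks them against the 1025 pages the odd_alloc test requested (hugepages.sh@99-110 in the trace). A hedged sketch of that bookkeeping — the variable names appear in the trace, the exact script text is assumed:

    nr_hugepages=1025                          # the deliberately odd page count under test
    surp=$(get_meminfo HugePages_Surp)         # -> 0 in this run (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)         # -> 0 in this run (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # the kernel-reported total must equal requested pages plus surplus plus reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

With surp=0 and resv=0, the third scan (below) must report HugePages_Total: 1025 for the check at hugepages.sh@110 to pass.
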
00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 
13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.849 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:36.850 nr_hugepages=1025 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.850 resv_hugepages=0 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.850 surplus_hugepages=0 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.850 anon_hugepages=0 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42263472 kB' 'MemAvailable: 46182408 kB' 'Buffers: 2704 kB' 'Cached: 11925100 kB' 'SwapCached: 0 kB' 'Active: 8801268 kB' 'Inactive: 3676316 kB' 'Active(anon): 8411404 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553020 kB' 'Mapped: 215404 kB' 'Shmem: 7861624 kB' 'KReclaimable: 500248 kB' 'Slab: 1138880 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638632 kB' 'KernelStack: 22128 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 9826152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.850 13:31:33 setup.sh.hugepages.odd_alloc -- 
[xtrace condensed: setup/common.sh@31-32 walks the remaining meminfo fields one read at a time (AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted), each failing the HugePages_Total match and hitting 'continue']
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
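[note: the loop traced above is one small helper at work — read one 'Field: value' pair per line of a meminfo-style file and print the value of the requested field (here HugePages_Total, which reports 1025). A minimal standalone sketch of the same pattern, our own simplified helper rather than SPDK's setup/common.sh, assuming single-digit NUMA node IDs:]

  # Print the value of one meminfo field, optionally for a single NUMA node.
  get_field() {
      local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
      # Per-node counters live in sysfs; fall back to the global file otherwise.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node [0-9] }              # per-node lines carry a "Node N " prefix
          IFS=': ' read -r var val _ <<<"$line" # split "Field:   value kB" into name/value
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }

  get_field HugePages_Total      # system-wide: 1025 on this box
  get_field HugePages_Surp 0     # node-local value, as queried next in the trace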
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.851 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.852 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27952800 kB' 'MemUsed: 4639284 kB' 'SwapCached: 0 kB' 'Active: 2281260 kB' 'Inactive: 274308 kB' 'Active(anon): 2121408 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406088 kB' 'Mapped: 104644 kB' 'AnonPages: 152632 kB' 'Shmem: 1971928 kB' 'KernelStack: 12168 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159192 kB' 'Slab: 427784 kB' 'SReclaimable: 159192 kB' 'SUnreclaim: 268592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 scans node0's fields (MemTotal … HugePages_Free), 'continue' on every non-match while looking for HugePages_Surp]
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14310744 kB' 'MemUsed: 13392364 kB' 'SwapCached: 0 kB' 'Active: 6520056 kB' 'Inactive: 3402008 kB' 'Active(anon): 6290044 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3402008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9521752 kB' 'Mapped: 110760 kB' 'AnonPages: 400396 kB' 'Shmem: 5889732 kB' 'KernelStack: 9960 kB' 'PageTables: 4900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341056 kB' 'Slab: 711096 kB' 'SReclaimable: 341056 kB' 'SUnreclaim: 370040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.853 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 scans node1's fields (MemTotal … HugePages_Free), 'continue' on every non-match while looking for HugePages_Surp]
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:36.854
00:03:36.854 real 0m3.263s
00:03:36.854 user 0m1.138s
00:03:36.854 sys 0m2.033s
13:31:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:36.854 13:31:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:36.854 ************************************
00:03:36.854 END TEST odd_alloc
00:03:36.854 ************************************
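[note: the check at hugepages.sh@126-130 just verified the odd allocation without caring which node received the extra page. The trick is that sorted_t and sorted_s are plain indexed arrays whose indices are the page counts themselves, and "${!arr[*]}" expands indexed-array indices in ascending order. A minimal sketch of the same comparison, our own rendering with variable names taken from the trace:]

  nodes_test=(512 513)    # per-node counts the test configured
  nodes_sys=(513 512)     # counts the kernel reports; the odd page may sit on either node
  sorted_t=() sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # indexed arrays: the count itself becomes the index
      sorted_s[nodes_sys[node]]=1
  done
  # Index lists expand in ascending order, so one string compare checks
  # "same counts, any node order" -- exactly the '[[ 512 513 == 512 513 ]]' above:
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd allocation verified"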
00:03:36.855 13:31:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:36.855 13:31:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:36.855 13:31:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:36.855 13:31:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:36.855 ************************************
00:03:36.855 START TEST custom_alloc
00:03:36.855 ************************************
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
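[note: the two get_test_nr_hugepages calls above turn a kB budget into a page count. With the 2048 kB default hugepage size this host reports later in the log ('Hugepagesize: 2048 kB'), 1048576 kB (1 GiB) yields 512 pages and 2097152 kB (2 GiB) yields 1024, matching hugepages.sh@57 in both traces. A worked one-liner sketch of that division, our own helper rather than SPDK's:]

  default_hugepages=2048            # kB, per the 'Hugepagesize: 2048 kB' field below
  pages_for() {
      local size_kb=$1              # requested pool size in kB
      (( size_kb >= default_hugepages )) || return 1
      echo $(( size_kb / default_hugepages ))
  }
  pages_for 1048576    # -> 512 pages, as traced for the first call
  pages_for 2097152    # -> 1024 pages, as traced for the second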
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
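[note: the HUGENODE string above is how custom_alloc hands setup.sh a per-node request: 512 pages on node0 plus 1024 on node1, 1536 in total, as hugepages.sh@188 confirms below. A short sketch of the assembly loop, simplified by us from the hugepages.sh@181-187 trace:]

  nodes_hp=(512 1024)          # desired 2 MiB pages on node0 and node1
  HUGENODE=() _nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( _nr_hugepages += nodes_hp[node] ))   # running total: 1536
  done
  IFS=,                                       # custom_alloc sets 'local IFS=,' for this join
  echo "HUGENODE=${HUGENODE[*]}"              # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024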
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.855 13:31:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:40.148 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.148 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.148 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.148 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.148 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.148 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.148 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.149 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.149 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41218284 kB' 'MemAvailable: 45137220 kB' 'Buffers: 2704 kB' 'Cached: 11925212 kB' 'SwapCached: 0 kB' 'Active: 8803248 kB' 'Inactive: 3676316 kB' 'Active(anon): 8413384 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554324 kB' 'Mapped: 215452 kB' 'Shmem: 7861736 kB' 'KReclaimable: 500248 kB' 'Slab: 1138820 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638572 kB' 'KernelStack: 22224 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9829372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: setup/common.sh@31-32 then walks the fields above one read at a time (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, …), 'continue' on every non-match while searching for AnonHugePages; this excerpt of the log ends during that scan, at the Slab check]
00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.150 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 
13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
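Note: the setup/common.sh@16-33 xtrace in this section all comes from a get_meminfo helper that scans /proc/meminfo key by key. A minimal bash sketch of that helper, reconstructed only from the trace lines here (the real setup/common.sh may order its node handling and error paths differently):

shopt -s extglob                 # assumption: the +([0-9]) pattern below needs extglob enabled

get_meminfo() {
    local get=$1                 # meminfo key to look up, e.g. AnonHugePages
    local node=$2                # optional NUMA node; empty in this run, hence [[ -n '' ]] in the trace
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node lookup: read the node's own meminfo file instead.
        mem_f=/sys/devices/system/node/node$node/meminfo
    elif [[ -n $node ]]; then
        return 1                 # a node was requested but exposes no meminfo file
    fi

    mapfile -t mem <"$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key by key; the 'continue' taken on every non-matching key is what
    # produces the long runs of xtrace output in this log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}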
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.415 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41219260 kB' 'MemAvailable: 45138196 kB' 'Buffers: 2704 kB' 'Cached: 11925216 kB' 'SwapCached: 0 kB' 'Active: 8802488 kB' 'Inactive: 3676316 kB' 'Active(anon): 8412624 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554016 kB' 'Mapped: 215432 kB' 'Shmem: 7861740 kB' 'KReclaimable: 500248 kB' 'Slab: 1138816 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638568 kB' 'KernelStack: 22192 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9828156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided, 00:03:40.415-00:03:40.417: the setup/common.sh@31-32 read loop compared every key from MemTotal through HugePages_Rsvd against HugePages_Surp and ran 'continue' on each non-match]
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
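Note: with node unset, the existence test above degenerates to the non-existent path /sys/devices/system/node/node/meminfo, so the helper falls back to the system-wide /proc/meminfo. Under the same sketch, a per-node query would look like this (hypothetical usage, not taken from this log):

# Hypothetical: surplus huge pages on NUMA node 0. This would read
# /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 "
# prefix ("Node 0 HugePages_Surp: 0") that the sketch strips before matching.
surp_node0=$(get_meminfo HugePages_Surp 0)
echo "node0 surplus hugepages: $surp_node0"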
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.417 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41219912 kB' 'MemAvailable: 45138848 kB' 'Buffers: 2704 kB' 'Cached: 11925228 kB' 'SwapCached: 0 kB' 'Active: 8803628 kB' 'Inactive: 3676316 kB' 'Active(anon): 8413764 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555132 kB' 'Mapped: 215432 kB' 'Shmem: 7861752 kB' 'KReclaimable: 500248 kB' 'Slab: 1138816 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638568 kB' 'KernelStack: 22192 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9844664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided, 00:03:40.417-00:03:40.419: the setup/common.sh@31-32 read loop compared every key from MemTotal through HugePages_Free against HugePages_Rsvd and ran 'continue' on each non-match]
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:40.419 nr_hugepages=1536
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:40.419 resv_hugepages=0
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:40.419 surplus_hugepages=0
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:40.419 anon_hugepages=0
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
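Note: the hugepages.sh@107-109 checks above pass because surp, resv, and anon are all 0, so the expected count is just nr_hugepages=1536. The snapshots are also self-consistent: HugePages_Total x Hugepagesize = 1536 x 2048 kB = 3145728 kB, exactly the Hugetlb figure they report. The same accounting in stand-alone form (a sketch built on the get_meminfo sketch above; the variable names only mirror the trace):

nr_hugepages=1536 surp=0 resv=0 anon=0

total=$(get_meminfo HugePages_Total)    # 1536 in the snapshots above
size_kb=$(get_meminfo Hugepagesize)     # 2048
hugetlb_kb=$(get_meminfo Hugetlb)       # 3145728

# The test's accounting: requested pages plus surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total"
# Cross-check against the aggregate hugetlb memory figure.
(( hugetlb_kb == total * size_kb )) || echo "Hugetlb != HugePages_Total * Hugepagesize"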
7861776 kB' 'KReclaimable: 500248 kB' 'Slab: 1138816 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638568 kB' 'KernelStack: 22384 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 9830520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.419 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.420 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
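The scan that just returned 1536, and the per-node pass that resumes below, both go through the same helper: map the chosen meminfo file into an array, strip any 'Node <n> ' prefix, then read key/value pairs until the requested key matches. A minimal sketch of that pattern follows, assuming the mainline-Linux layout /sys/devices/system/node/node<N>/meminfo; the names mirror the trace, but the body is an illustration, not the SPDK helper itself.

shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo_sketch KEY [NODE] -- illustrative reimplementation of the
# lookup traced above. Per-node meminfo lines carry a "Node <n> " prefix
# that /proc/meminfo lacks, so both are normalized before parsing.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    local mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1536 for HugePages_Total
            return 0
        fi
    done
    return 1
}

# On the box in this log: get_meminfo_sketch HugePages_Total  -> 1536
#                         get_meminfo_sketch HugePages_Surp 0 -> 0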
00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27955400 kB' 'MemUsed: 4636684 kB' 'SwapCached: 0 kB' 'Active: 2281592 kB' 'Inactive: 274308 kB' 'Active(anon): 2121740 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406168 kB' 'Mapped: 104648 kB' 'AnonPages: 152988 kB' 'Shmem: 1972008 kB' 'KernelStack: 12232 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159192 kB' 'Slab: 427596 kB' 'SReclaimable: 159192 kB' 'SUnreclaim: 268404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:40.421 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: every node0 key from MemTotal through HugePages_Free fails the HugePages_Surp match and hits continue]
00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13263328 kB' 'MemUsed: 14439780 kB' 'SwapCached: 0 kB' 'Active: 6521292 kB' 'Inactive: 3402008 kB' 'Active(anon): 6291280 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3402008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9521812 kB' 'Mapped: 111224 kB' 'AnonPages: 401532 kB' 'Shmem: 5889792 kB' 'KernelStack: 9960 kB' 'PageTables: 4956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341056 kB' 'Slab: 711196 kB' 'SReclaimable: 341056 kB' 'SUnreclaim: 370140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.423 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: every node1 key from MemTotal through HugePages_Free fails the HugePages_Surp match and hits continue]
00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.424 13:31:37
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.424 node0=512 expecting 512 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:40.424 node1=1024 expecting 1024 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:40.424 00:03:40.424 real 0m3.527s 00:03:40.424 user 0m1.366s 00:03:40.424 sys 0m2.229s 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.424 13:31:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.425 ************************************ 00:03:40.425 END TEST custom_alloc 00:03:40.425 ************************************ 00:03:40.425 13:31:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:40.425 13:31:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.425 13:31:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.425 13:31:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.684 ************************************ 00:03:40.684 START TEST no_shrink_alloc 00:03:40.684 ************************************ 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.684 13:31:37 
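custom_alloc finishes green: node0 and node1 hold exactly the requested 512 and 1024 pages of 2048 kB, 1536 in total, with no surplus or reserved pages. no_shrink_alloc then asks for 2097152 kB on node 0 only, which the trace resolves to nr_hugepages=1024 before its per-node trace resumes below. A hedged sketch of that sizing arithmetic: the function body is illustrative, and only the variable names and the 2048 kB page size (Hugepagesize in the dumps above) come from the log.

# get_test_nr_hugepages_sketch SIZE_KB [NODE...] -- illustrative version of
# the sizing step traced here, not the setup/hugepages.sh source.
get_test_nr_hugepages_sketch() {
    local size=$1; shift                # requested size in kB
    local node_ids=("$@")               # optional target nodes, e.g. (0)
    local default_hugepages=2048        # kB per page, i.e. 2 MiB (assumption)
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
    declare -ga nodes_test=()
    local n
    for n in "${node_ids[@]}"; do
        nodes_test[n]=$nr_hugepages     # pin the whole allocation to each listed node
    done
}

get_test_nr_hugepages_sketch 2097152 0  # -> nr_hugepages=1024, nodes_test[0]=1024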
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:40.684 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.685 13:31:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.220 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.220 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.220 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42279224 kB' 'MemAvailable: 46198160 kB' 'Buffers: 2704 kB' 'Cached: 11925368 kB' 'SwapCached: 0 kB' 'Active: 8804008 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414144 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556184 kB' 'Mapped: 215536 kB' 'Shmem: 7861892 kB' 'KReclaimable: 500248 kB' 'Slab: 1138584 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638336 kB' 'KernelStack: 22272 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9830748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.221 13:31:40 
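
The long quoted 'MemTotal: ... kB' record above is the trace of get_meminfo re-emitting the whole meminfo array (printf '%s\n' "${mem[@]}") into its scan loop; everything that follows, down to the AnonHugePages hit, is that loop visiting each field in turn and skipping it with continue. A sketch of the parser, assembled from the common.sh@17-@33 records rather than the SPDK source:

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # With a node argument, read that node's own meminfo when it exists;
    # here node is empty, so the test fails and /proc/meminfo is used.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # formats parse identically (extglob pattern, as at common.sh@29).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Matching the trace: get_meminfo AnonHugePages prints 0 on this box (the field's value in kB, with the trailing unit swallowed by the _ placeholder in read), and the caller stores it as anon=0.
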
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.221 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.484 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 
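
A side note on the match targets in these records: patterns like \A\n\o\n\H\u\g\e\P\a\g\e\s are not log corruption. When the right-hand side of == inside [[ ]] is quoted, bash's xtrace prints every character backslash-escaped to show it is matched literally rather than as a glob, which is why each field check carries the fully escaped key. A two-line reproduction:

set -x
get=AnonHugePages
[[ MemFree == "$get" ]]   # traces as: [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
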
13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.485 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42279200 kB' 'MemAvailable: 46198136 kB' 'Buffers: 2704 kB' 'Cached: 11925372 kB' 'SwapCached: 0 kB' 'Active: 8804432 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414568 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556472 kB' 'Mapped: 215472 kB' 'Shmem: 7861896 kB' 'KReclaimable: 500248 kB' 'Slab: 1138628 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638380 kB' 'KernelStack: 22272 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9830764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.486 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 
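
The scan grinding through these records is the second get_meminfo call. verify_nr_hugepages (entered at hugepages.sh@89 above) collects three counters before checking per-node totals; a hedged sketch of that flow, assembled from the @89-@100 records rather than the SPDK source. The THP path used here is the standard sysfs file whose "always [madvise] never" value appears verbatim in the @96 check:

verify_nr_hugepages() {
    local surp resv anon=0
    # Count transparent huge pages only when THP is not set to [never];
    # on this box the setting is [madvise], so the branch is taken.
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 here
    fi
    surp=$(get_meminfo HugePages_Surp)      # 0 here
    resv=$(get_meminfo HugePages_Rsvd)      # this scan is still running below
    # ...the trace then compares per-node totals against nr_hugepages.
}
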
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 
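
The HugePages_* counters sit near the bottom of /proc/meminfo, which is why each lookup has to skip several dozen fields before reaching its target and why the continue records run so long. For a one-off lookup outside the harness, a single awk does the same job (an aside, not the script's method):

awk -v k=HugePages_Surp: '$1 == k { print $2 }' /proc/meminfo
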
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.487 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42280200 kB' 'MemAvailable: 46199136 kB' 'Buffers: 2704 kB' 'Cached: 11925372 kB' 'SwapCached: 0 kB' 'Active: 8804224 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414360 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555760 kB' 'Mapped: 215472 kB' 'Shmem: 7861896 kB' 'KReclaimable: 500248 kB' 'Slab: 1138628 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638380 kB' 
'KernelStack: 22208 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9829184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 
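
On the record format itself: each trace line carries the Jenkins pipeline timestamp (00:03:43.488), then a prefix emitted by bash's xtrace PS4 -- wall-clock time, the nested test name (setup.sh.hugepages.no_shrink_alloc), and the source location (setup/common.sh@32). The prompt is set by the harness (the xtrace_disable calls above come from common/autotest_common.sh, which also manages xtrace state); something of this shape reproduces it:

# Illustrative reconstruction only: test_domain is a stand-in for however
# the harness tracks the nested test name, and SPDK's actual PS4 differs
# in detail.
export PS4='$(date +%T) $test_domain -- ${BASH_SOURCE#./}@${LINENO} -- # '
set -x
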
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.488 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.489 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
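What the repetitive trace above and below is doing: get_meminfo() in setup/common.sh scans a meminfo file one "Field: value" line at a time. For every field that is not the requested key (here HugePages_Rsvd) the [[ ... ]] test fails and the loop hits continue, so the IFS=': ' / read -r var val _ / [[ ... ]] triplet repeats once per field of /proc/meminfo. The backslash-riddled right-hand side such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d is not corruption: bash xtrace prints a quoted comparison operand with every character escaped. A condensed, self-contained sketch of the lookup as it can be reconstructed from the common.sh@17-33 trace lines (a sketch, not the verbatim SPDK helper):

  #!/usr/bin/env bash
  # get_meminfo KEY [NODE]: print the value of "KEY:" from /proc/meminfo,
  # or from /sys/devices/system/node/nodeNODE/meminfo when NODE is given.
  shopt -s extglob                      # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-} var val _ mem_f mem line
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix lines with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then # xtrace renders this RHS escaped
              echo "$val"               # e.g. "0" for HugePages_Rsvd on this box
              return 0
          fi
      done
      return 1
  }
  get_meminfo HugePages_Rsvd            # whole-system value
  get_meminfo HugePages_Free 0          # node0 value

Because the scan is linear, a key near the bottom of /proc/meminfo (the HugePages_* block) walks past every field above it, which is exactly the long run of continue lines recorded in this log.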
00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.490 nr_hugepages=1024 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.490 resv_hugepages=0 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.490 surplus_hugepages=0 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.490 anon_hugepages=0 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42279008 kB' 'MemAvailable: 46197944 kB' 'Buffers: 2704 kB' 'Cached: 11925376 kB' 'SwapCached: 0 kB' 'Active: 8804020 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414156 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556048 kB' 'Mapped: 215480 kB' 'Shmem: 7861900 kB' 'KReclaimable: 500248 kB' 'Slab: 1138628 kB' 'SReclaimable: 500248 kB' 'SUnreclaim: 638380 kB' 'KernelStack: 22208 kB' 
'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9830808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216788 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.490 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
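For reference while the HugePages_Total scan above grinds on: the /proc/meminfo snapshot printed at common.sh@16 a few lines back reported 'HugePages_Total: 1024', 'HugePages_Free: 1024', 'HugePages_Rsvd: 0', 'HugePages_Surp: 0', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB', which is internally consistent: 1024 pages x 2048 kB per page = 2097152 kB (2 GiB). The assertions at hugepages.sh@107-110 check that same bookkeeping; in spirit:

  # The invariant being asserted, with this run's values
  # (variable names follow the hugepages.sh@100-110 trace above):
  nr_hugepages=1024   # expected pool size
  resv=0              # get_meminfo HugePages_Rsvd
  surp=0              # get_meminfo HugePages_Surp
  total=1024          # get_meminfo HugePages_Total, being re-read above
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting off' >&2
  echo "$(( total * 2048 )) kB"   # 2097152 kB, matching the Hugetlb field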
00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.491 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.492 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26909260 kB' 'MemUsed: 5682824 kB' 'SwapCached: 0 kB' 'Active: 2281740 kB' 'Inactive: 274308 kB' 'Active(anon): 2121888 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406288 kB' 'Mapped: 104648 kB' 'AnonPages: 153052 kB' 'Shmem: 1972128 kB' 'KernelStack: 12280 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159192 kB' 'Slab: 427316 kB' 'SReclaimable: 159192 kB' 'SUnreclaim: 268124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.492 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.493 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.494 node0=1024 expecting 1024 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.494 13:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
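At this point the per-node walk has passed: get_nodes globbed /sys/devices/system/node/node+([0-9]), found two NUMA nodes (nodes_sys[0]=1024, nodes_sys[1]=0), and the per-node re-count printed 'node0=1024 expecting 1024'. The test then flips CLEAR_HUGE=no, sets NRHUGE=512 and re-runs scripts/setup.sh. In the output that follows, setup.sh finds the PCI functions already bound to vfio-pci and reports 'INFO: Requested 512 hugepages but 1024 already allocated on node0': with clearing disabled, a request smaller than the existing pool leaves the pool at 1024 pages, and the verify_nr_hugepages pass that follows (including the transparent-hugepage *\[\n\e\v\e\r\]* gate and the AnonHugePages read) confirms nothing shrank, which is the point of this no_shrink_alloc case. A hedged reduction of that decision in terms of the kernel's per-node sysfs knob (a hypothetical sketch only; setup.sh's real allocation logic is more involved):

  # Hypothetical sketch of the "grow only, never shrink" rule reported below;
  # assumes 2048 kB pages on node0 and root privileges.
  NRHUGE=512
  knob=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  cur=$(<"$knob")
  if (( cur >= NRHUGE )); then
      echo "INFO: Requested $NRHUGE hugepages but $cur already allocated on node0"
  else
      echo "$NRHUGE" > "$knob"   # only ever grows the pool when CLEAR_HUGE=no
  fi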
00:03:46.785 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.785 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.785 INFO: Requested 512 hugepages but 1024 already allocated on node0 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42280788 kB' 'MemAvailable: 46199692 kB' 'Buffers: 2704 kB' 'Cached: 11925512 kB' 'SwapCached: 0 kB' 'Active: 8804112 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414248 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555072 kB' 'Mapped: 215572 kB' 'Shmem: 7862036 kB' 'KReclaimable: 500216 kB' 'Slab: 1138300 kB' 'SReclaimable: 500216 kB' 'SUnreclaim: 638084 kB' 'KernelStack: 22192 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9828576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42280788 kB' 'MemAvailable: 46199692 kB' 'Buffers: 2704 kB' 'Cached: 11925512 kB' 'SwapCached: 0 kB' 'Active: 8804112 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414248 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555072 kB' 'Mapped: 215572 kB' 'Shmem: 7862036 kB' 'KReclaimable: 500216 kB' 'Slab: 1138300 kB' 'SReclaimable: 500216 kB' 'SUnreclaim: 638084 kB' 'KernelStack: 22192 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9828576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.785 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:46.786 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # [... IFS=': ', read -r var val _, compare, continue: repeated for each remaining key from MemFree through HardwareCorrupted; none match AnonHugePages ...]
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
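The records above are one complete pass of setup/common.sh's get_meminfo helper: it slurps /proc/meminfo (or a per-node meminfo file when a node is passed; with node unset, the [[ -e /sys/devices/system/node/node/meminfo ]] probe simply fails), strips any "Node N " prefix with an extglob substitution, then splits each line on ': ' and prints the value column of the first matching key. A minimal reconstruction from this trace, hedged as a sketch rather than the shipped setup/common.sh:

#!/usr/bin/env bash
# Reconstructed from the xtrace above; details of the real helper may differ.
shopt -s extglob   # enables the +([0-9]) pattern used below
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem line
    # per-node stats live in /sys/devices/system/node/node<N>/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " column of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long continue runs in this log
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo HugePages_Total   # would print 1024 against the snapshot above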
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42281208 kB' 'MemAvailable: 46200112 kB' 'Buffers: 2704 kB' 'Cached: 11925516 kB' 'SwapCached: 0 kB' 'Active: 8803504 kB' 'Inactive: 3676316 kB' 'Active(anon): 8413640 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555024 kB' 'Mapped: 215448 kB' 'Shmem: 7862040 kB' 'KReclaimable: 500216 kB' 'Slab: 1138288 kB' 'SReclaimable: 500216 kB' 'SUnreclaim: 638072 kB' 'KernelStack: 22176 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9829828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216596 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.787 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:46.788 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # [... IFS=': ', read -r var val _, compare, continue: repeated for each remaining key from MemFree through HugePages_Rsvd; none match HugePages_Surp ...]
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
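Two patterns in these records are artifacts of bash xtrace rather than of the scripts themselves. The escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is how set -x renders a quoted right-hand side of == inside [[ ]]: each character is backslash-escaped so the comparison stays literal instead of being treated as a glob. Likewise, the earlier guard [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is the expanded contents of /sys/kernel/mm/transparent_hugepage/enabled tested against the quoted glob *[never]*, i.e. a check that transparent hugepages are not disabled before AnonHugePages is sampled. A small standalone demo of the rendering, illustrative rather than taken from the SPDK scripts:

#!/usr/bin/env bash
set -x
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
# xtrace prints the next test as: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
if [[ $thp != *"[never]"* ]]; then
    echo "THP enabled; AnonHugePages may be nonzero"
fi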
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42282188 kB' 'MemAvailable: 46201092 kB' 'Buffers: 2704 kB' 'Cached: 11925536 kB' 'SwapCached: 0 kB' 'Active: 8803896 kB' 'Inactive: 3676316 kB' 'Active(anon): 8414032 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555340 kB' 'Mapped: 215448 kB' 'Shmem: 7862060 kB' 'KReclaimable: 500216 kB' 'Slab: 1138288 kB' 'SReclaimable: 500216 kB' 'SUnreclaim: 638072 kB' 'KernelStack: 22096 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9830220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB'
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.789 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:46.790 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # [... IFS=': ', read -r var val _, compare, continue: repeated for each remaining key from MemFree through HugePages_Free; none match HugePages_Rsvd ...]
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:46.791 nr_hugepages=1024
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:46.791 resv_hugepages=0
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:46.791 surplus_hugepages=0
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:46.791 anon_hugepages=0
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.791 nr_hugepages=1024 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.791 resv_hugepages=0 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.791 surplus_hugepages=0 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.791 anon_hugepages=0 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.791 13:31:43 
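The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" pairs above is bash xtrace from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node copy under sysfs), then walks every "Key: value" line until it reaches the requested key, so each skipped key costs exactly one test-and-continue in the trace. A condensed sketch reconstructed from that trace; the real helper's body may differ in detail:

    shopt -s extglob   # for the "Node +([0-9]) " strip below
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # A per-node query switches to the sysfs copy of the counters; with
        # an empty $node this path does not exist and /proc/meminfo is kept
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; drop the prefix
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # bare number, e.g. 0 for HugePages_Rsvd here
                return 0
            fi
            # every non-matching key is one traced [[ ... ]] / continue pair
        done
        return 1
    }

So get_meminfo HugePages_Rsvd prints 0 on this host, which is exactly what the trace echoes before resv=0 is set.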
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42282376 kB' 'MemAvailable: 46201280 kB' 'Buffers: 2704 kB' 'Cached: 11925552 kB' 'SwapCached: 0 kB' 'Active: 8803856 kB' 'Inactive: 3676316 kB' 'Active(anon): 8413992 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676316 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555184 kB' 'Mapped: 215448 kB' 'Shmem: 7862076 kB' 'KReclaimable: 500216 kB' 'Slab: 1138288 kB' 'SReclaimable: 500216 kB' 'SUnreclaim: 638072 kB' 'KernelStack: 22192 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 9831476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 97664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3145076 kB' 'DirectMap2M: 15415296 kB' 'DirectMap1G: 50331648 kB' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.791 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.792 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.792 13:31:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.793 13:31:43 
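With the pool confirmed at 1024 pages total, 0 surplus and 0 reserved, get_nodes (traced above) records the per-node split: node0=1024, node1=0. xtrace prints only the already-expanded value, so where the count is read from is not visible in the log; reading each node's 2 MiB pool from sysfs is the obvious candidate and is assumed in this sketch:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # node id = trailing digits: .../node0 -> 0, .../node1 -> 1
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this host, matching the trace
    ((no_nodes > 0))            # the test requires at least one node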
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26899256 kB' 'MemUsed: 5692828 kB' 'SwapCached: 0 kB' 'Active: 2281512 kB' 'Inactive: 274308 kB' 'Active(anon): 2121660 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2406392 kB' 'Mapped: 104648 kB' 'AnonPages: 152544 kB' 'Shmem: 1972232 kB' 'KernelStack: 12168 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159192 kB' 'Slab: 427252 kB' 'SReclaimable: 159192 kB' 'SUnreclaim: 268060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.793 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.794 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.795 node0=1024 expecting 1024 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- 
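The HugePages_Surp lookup that just returned 0 used the node-local meminfo file, where the kernel prefixes every line with the node id. The strip traced above is a single extglob parameter expansion; a standalone repro of the transformation:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 HugePages_Surp:     0"  ->  "HugePages_Surp:     0"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r key val _ <<< "$line"
        [[ $key == HugePages_Surp ]] && echo "node0 surplus: $val"
    done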
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.795 00:03:46.795 real 0m5.963s 00:03:46.795 user 0m2.007s 00:03:46.795 sys 0m3.917s 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.795 13:31:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.795 ************************************ 00:03:46.795 END TEST no_shrink_alloc 00:03:46.795 ************************************ 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.795 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.795 00:03:46.795 real 0m25.255s 00:03:46.795 user 0m8.493s 00:03:46.795 sys 0m15.250s 00:03:46.795 13:31:43 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.795 13:31:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.795 ************************************ 00:03:46.795 END TEST hugepages 00:03:46.795 ************************************ 00:03:46.795 13:31:43 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:46.795 13:31:43 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.795 13:31:43 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.795 13:31:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.795 ************************************ 00:03:46.795 START TEST driver 00:03:46.795 ************************************ 00:03:46.795 13:31:43 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:46.795 * Looking for test storage... 
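Between the hugepages and driver test groups, clear_hp (traced above) zeroes every hugepage pool on every node so the next suite starts clean. xtrace shows only the echo 0, not its redirection target; writing each pool's nr_hugepages file is the standard sysfs mechanism and is the assumed destination in this sketch:

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # covers both hugepages-2048kB and hugepages-1048576kB pools
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # exported for the tests that follow, per the trace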
00:03:46.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.795 13:31:43 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:46.795 13:31:43 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.795 13:31:43 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.123 13:31:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:52.123 13:31:47 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.123 13:31:47 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.123 13:31:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:52.123 ************************************ 00:03:52.123 START TEST guess_driver 00:03:52.123 ************************************ 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:52.123 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver 
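guess_driver, traced above, settles on vfio-pci because the host has a usable IOMMU: the unsafe-noIOMMU override reads N, /sys/kernel/iommu_groups holds 176 entries, and modprobe resolves vfio_pci to real .ko modules. A condensed sketch of that decision path (whatever fallback pick_driver takes when vfio is not viable is not visible in this trace):

    vfio() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # populated IOMMU groups (176 here) or an explicit unsafe override
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            # the traced is_driver check: modprobe must resolve to .ko files
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        return 1
    }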
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:52.123 Looking for driver=vfio-pci 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.123 13:31:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 
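The repeated "[[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]]" pairs that start here and continue below are the verification loop: the setup helper runs scripts/setup.sh config, which prints one line per managed device, and the test checks the literal -> marker and that every bound driver equals the guess. A sketch with the field layout inferred from the traced read -r _ _ _ _ marker setup_driver; the example line shape is an assumption:

    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        # assumed shape: "0000:d8:00.0 (8086 0a54): nvme -> vfio-pci"
        [[ $marker == '->' ]] || continue          # only device-binding lines
        [[ $setup_driver == vfio-pci ]] || fail=1  # every device must match
    done < <(setup output config)
    ((fail == 0))   # the trace reaches this check with fail still 0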
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.661 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.040 13:31:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.316 00:04:01.316 real 0m9.678s 00:04:01.316 user 0m2.631s 00:04:01.316 sys 0m4.845s 00:04:01.316 13:31:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.316 13:31:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.316 ************************************ 00:04:01.316 END TEST guess_driver 00:04:01.316 ************************************ 00:04:01.316 00:04:01.316 real 0m14.295s 00:04:01.316 user 0m3.884s 00:04:01.316 sys 0m7.395s 00:04:01.316 13:31:57 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.316 
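The guess_driver loop above is a single parse over setup.sh's config listing: each rebind line has the form '<bdf> (<vendor> <device>): <old-driver> -> <new-driver>', so the fifth field is the '->' marker and the sixth is the driver now bound, and the test fails unless every device reports vfio-pci. A minimal sketch of that check, assuming only the field layout visible in this log (the relative script path is a placeholder):

    expected=vfio-pci
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue            # only rebind lines carry a driver
        [[ $setup_driver == "$expected" ]] || fail=1 # any other driver fails the test
    done < <(scripts/setup.sh config)
    (( fail == 0 )) && echo "Looking for driver=$expected: confirmed on every device"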
13:31:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.316 ************************************ 00:04:01.316 END TEST driver 00:04:01.316 ************************************ 00:04:01.316 13:31:57 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:01.316 13:31:57 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.316 13:31:57 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.316 13:31:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.316 ************************************ 00:04:01.316 START TEST devices 00:04:01.316 ************************************ 00:04:01.316 13:31:57 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:01.316 * Looking for test storage... 00:04:01.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:01.316 13:31:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:01.316 13:31:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:01.316 13:31:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.317 13:31:57 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.601 13:32:01 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:04.602 13:32:01 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:04.602 No valid GPT data, 
bailing 00:04:04.602 13:32:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:04.602 13:32:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:04.602 13:32:01 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:04.602 13:32:01 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.602 13:32:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:04.602 ************************************ 00:04:04.602 START TEST nvme_mount 00:04:04.602 ************************************ 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.602 13:32:01 
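At this point the devices suite has chosen its test disk, and the trace shows the full eligibility gate: spdk-gpt.py bails with 'No valid GPT data', blkid finds no PTTYPE (so block_in_use returns 1, meaning free), and the disk's 1600321314816 bytes clear the 3221225472-byte minimum. A sketch of that gate, assuming the usual sysfs sector arithmetic; the helper name and array bookkeeping are illustrative:

    min_disk_size=3221225472      # 3 GiB floor, as traced
    blocks=()
    disk_is_free() {
        # mirrors the traced probe: an empty PTTYPE means no partition table
        [[ -z $(blkid -s PTTYPE -o value "/dev/$1") ]]
    }
    size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))  # sysfs sizes are 512 B sectors
    if disk_is_free nvme0n1 && (( size >= min_disk_size )); then
        blocks+=(nvme0n1)         # 1600321314816 B here, so nvme0n1 qualifies
    fi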
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.602 13:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:05.539 Creating new GPT entries in memory. 00:04:05.539 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.539 other utilities. 00:04:05.539 13:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.539 13:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.539 13:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.539 13:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.539 13:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:06.475 Creating new GPT entries in memory. 00:04:06.475 The operation has completed successfully. 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58832 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
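The partition/format/mount sequence just traced reduces to a handful of commands. The sgdisk geometry (sectors 2048 through 2099199, exactly 1 GiB at 1 MiB alignment) comes straight from the log; the mount point and the udevadm stand-in for sync_dev_uevents.sh are assumptions:

    disk=/dev/nvme0n1
    mnt=$PWD/nvme_mount
    sgdisk "$disk" --zap-all                            # clear old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition
    udevadm settle                                      # wait for the nvme0n1p1 uevent
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                              # the dummy file verify checks for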
00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.475 13:32:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.778 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.779 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.779 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.039 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.039 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.039 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.039 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:10.039 
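Every verify pass in this log follows the same pattern: re-run setup.sh config with PCI_ALLOWED pinned to the controller under test and confirm the script refuses to rebind it because the expected mount is active. A sketch with the 'Active devices' text quoted from the trace and the variable names assumed:

    target=0000:d8:00.0
    expected='mount@nvme0n1:nvme0n1p1'
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(PCI_ALLOWED=$target scripts/setup.sh config)
    (( found == 1 ))    # the device must have stayed bound to its in-use driver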
13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.039 13:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.327 13:32:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.327 13:32:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.899 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.900 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.900 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.900 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.158 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:16.159 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:16.159 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:16.159 13:32:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.418 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.418 00:04:16.418 real 0m11.869s 00:04:16.418 user 0m3.360s 00:04:16.418 sys 0m6.320s 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.418 13:32:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:16.418 ************************************ 00:04:16.418 END TEST nvme_mount 00:04:16.418 
************************************ 00:04:16.418 13:32:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:16.418 13:32:13 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.418 13:32:13 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.418 13:32:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:16.418 ************************************ 00:04:16.418 START TEST dm_mount 00:04:16.418 ************************************ 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.418 13:32:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:17.355 Creating new GPT entries in memory. 00:04:17.355 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.355 other utilities. 00:04:17.355 13:32:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.355 13:32:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.355 13:32:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.355 13:32:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.355 13:32:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.732 Creating new GPT entries in memory. 00:04:18.732 The operation has completed successfully. 
00:04:18.732 13:32:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.732 13:32:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.732 13:32:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.732 13:32:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.732 13:32:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:19.669 The operation has completed successfully. 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 63157 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 
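dm_mount stacks one device-mapper node on the two 1 GiB partitions created above. The trace only shows 'dmsetup create nvme_dm_test', not the table it was fed, so the linear concatenation below is an assumption, though it is consistent with both partitions turning up as holders of dm-0:

    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
    len1=$(blockdev --getsz "$p1")   # partition lengths in 512 B sectors
    len2=$(blockdev --getsz "$p2")
    printf '%s\n' "0 $len1 linear $p1 0" \
                  "$len1 $len2 linear $p2 0" | dmsetup create nvme_dm_test
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # dm-0 in this run
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]            # both partitions hold it
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test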
00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:19.669 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.670 13:32:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount 
-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.959 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.960 13:32:19 
setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.960 13:32:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.495 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:25.496 13:32:21 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:25.496 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:25.496 00:04:25.496 real 0m8.922s 00:04:25.496 user 0m1.849s 00:04:25.496 sys 0m3.878s 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.496 13:32:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:25.496 ************************************ 00:04:25.496 END TEST dm_mount 00:04:25.496 ************************************ 00:04:25.496 13:32:22 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:25.496 13:32:22 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.496 13:32:22 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.496 13:32:22 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.496 13:32:22 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.496 13:32:22 
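The wipefs output in this teardown is the interesting part: '53 ef' is the ext4 superblock magic (0xEF53), '45 46 49 20 50 41 52 54' spells 'EFI PART' (the GPT signature, erased at both the primary and backup header), and '55 aa' is the protective-MBR boot signature. A condensed sketch of the two cleanup helpers traced here, with the mount-point variables assumed:

    cleanup_dm() {
        mountpoint -q "$dm_mount" && umount "$dm_mount"
        [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # drops the ext4 magic
        [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
    }
    cleanup_nvme() {
        mountpoint -q "$nvme_mount" && umount "$nvme_mount"
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # clears GPT + PMBR signatures
    }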
setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.496 13:32:22 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.755 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:25.755 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:25.755 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.755 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.755 13:32:22 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:25.755 00:04:25.755 real 0m24.628s 00:04:25.755 user 0m6.363s 00:04:25.755 sys 0m12.717s 00:04:25.755 13:32:22 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.755 13:32:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.755 ************************************ 00:04:25.755 END TEST devices 00:04:25.755 ************************************ 00:04:25.755 00:04:25.755 real 1m26.986s 00:04:25.755 user 0m25.578s 00:04:25.755 sys 0m49.241s 00:04:25.755 13:32:22 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.755 13:32:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.755 ************************************ 00:04:25.755 END TEST setup.sh 00:04:25.755 ************************************ 00:04:25.755 13:32:22 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.111 Hugepages 00:04:29.111 node hugesize free / total 00:04:29.111 node0 1048576kB 0 / 0 00:04:29.111 node0 2048kB 2048 / 2048 00:04:29.111 node1 1048576kB 0 / 0 00:04:29.111 node1 2048kB 0 / 0 00:04:29.111 00:04:29.111 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.111 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:29.111 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:29.111 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:29.111 13:32:25 -- spdk/autotest.sh@130 -- # uname -s 00:04:29.111 13:32:25 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:29.111 13:32:25 -- spdk/autotest.sh@132 -- # 
nvme_namespace_revert 00:04:29.111 13:32:25 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.401 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:32.401 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.780 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.038 13:32:30 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:34.989 13:32:31 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:34.989 13:32:31 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:34.989 13:32:31 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.989 13:32:31 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:34.989 13:32:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:34.989 13:32:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:34.989 13:32:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.989 13:32:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.989 13:32:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:34.989 13:32:31 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:34.990 13:32:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:34.990 13:32:31 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.313 Waiting for block devices as requested 00:04:38.313 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.313 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.572 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.572 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.572 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.572 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:38.831 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:38.831 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:38.831 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:39.090 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:39.090 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:39.090 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:39.349 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:39.349 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.349 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:39.608 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.608 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:39.868 13:32:36 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:39.868 13:32:36 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 
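The get_nvme_bdfs expansion traced above is worth isolating: gen_nvme.sh emits a JSON bdev config for every NVMe controller it finds, and jq pulls out each controller's PCI address. A minimal standalone sketch, assuming $rootdir points at the SPDK checkout used in this run:

# Enumerate NVMe BDFs the way common/autotest_common.sh's get_nvme_bdfs does.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"    # prints 0000:d8:00.0 on this node

The (( 1 == 0 )) check in the trace is this same guard: the run would bail out early if the generated config came back empty.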
00:04:39.868 13:32:36 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:39.868 13:32:36 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:39.868 13:32:36 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:39.868 13:32:36 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:39.868 13:32:36 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:39.868 13:32:36 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:39.868 13:32:36 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:39.868 13:32:36 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:39.868 13:32:36 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:39.868 13:32:36 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:39.868 13:32:36 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:39.868 13:32:36 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:39.868 13:32:36 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:39.868 13:32:36 -- common/autotest_common.sh@1557 -- # continue 00:04:39.868 13:32:36 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:39.868 13:32:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.868 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.868 13:32:36 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.868 13:32:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.868 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.868 13:32:36 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.405 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.405 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.405 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.405 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.405 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.664 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:44.572 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.572 13:32:41 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:44.572 13:32:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.572 13:32:41 -- common/autotest_common.sh@10 -- # set +x 00:04:44.572 13:32:41 -- spdk/autotest.sh@144 -- # 
opal_revert_cleanup 00:04:44.572 13:32:41 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:44.572 13:32:41 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:44.572 13:32:41 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:44.572 13:32:41 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:44.572 13:32:41 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:44.572 13:32:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:44.572 13:32:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:44.572 13:32:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.572 13:32:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:44.572 13:32:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:44.572 13:32:41 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:44.572 13:32:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:44.572 13:32:41 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:44.572 13:32:41 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:44.572 13:32:41 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:44.572 13:32:41 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:44.572 13:32:41 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:44.572 13:32:41 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:04:44.572 13:32:41 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:04:44.572 13:32:41 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=72446 00:04:44.572 13:32:41 -- common/autotest_common.sh@1598 -- # waitforlisten 72446 00:04:44.572 13:32:41 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.572 13:32:41 -- common/autotest_common.sh@831 -- # '[' -z 72446 ']' 00:04:44.572 13:32:41 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.572 13:32:41 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.572 13:32:41 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.572 13:32:41 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.572 13:32:41 -- common/autotest_common.sh@10 -- # set +x 00:04:44.572 [2024-07-25 13:32:41.326340] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:44.572 [2024-07-25 13:32:41.326395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72446 ] 00:04:44.572 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.572 [2024-07-25 13:32:41.362922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
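waitforlisten, seen here blocking on pid 72446, is the gate between starting spdk_tgt and issuing RPCs against it: it polls until the target answers on its UNIX-domain RPC socket. A simplified sketch of that helper, assuming the default /var/tmp/spdk.sock address and SPDK's stock rpc.py (the real helper in common/autotest_common.sh also handles custom addresses and retry tuning):

# Poll until the target (pid $1) is alive and answering RPC, or give up.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1            # target died early
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 \
            rpc_get_methods >/dev/null 2>&1 && return 0   # socket is up
        sleep 0.1
    done
    return 1
}

Only once this returns 0 can the suite safely fire bdev_nvme_attach_controller and the opal revert that follow.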
00:04:44.572 [2024-07-25 13:32:41.397560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.572 [2024-07-25 13:32:41.437385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.509 13:32:42 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.509 13:32:42 -- common/autotest_common.sh@864 -- # return 0 00:04:45.509 13:32:42 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:45.509 13:32:42 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:45.509 13:32:42 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:48.798 nvme0n1 00:04:48.798 13:32:45 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:48.798 [2024-07-25 13:32:45.255031] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:48.798 request: 00:04:48.798 { 00:04:48.798 "nvme_ctrlr_name": "nvme0", 00:04:48.798 "password": "test", 00:04:48.798 "method": "bdev_nvme_opal_revert", 00:04:48.798 "req_id": 1 00:04:48.798 } 00:04:48.798 Got JSON-RPC error response 00:04:48.798 response: 00:04:48.798 { 00:04:48.798 "code": -32602, 00:04:48.798 "message": "Invalid parameters" 00:04:48.798 } 00:04:48.798 13:32:45 -- common/autotest_common.sh@1604 -- # true 00:04:48.798 13:32:45 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:48.798 13:32:45 -- common/autotest_common.sh@1608 -- # killprocess 72446 00:04:48.798 13:32:45 -- common/autotest_common.sh@950 -- # '[' -z 72446 ']' 00:04:48.798 13:32:45 -- common/autotest_common.sh@954 -- # kill -0 72446 00:04:48.798 13:32:45 -- common/autotest_common.sh@955 -- # uname 00:04:48.798 13:32:45 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.798 13:32:45 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72446 00:04:48.798 13:32:45 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.799 13:32:45 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.799 13:32:45 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72446' 00:04:48.799 killing process with pid 72446 00:04:48.799 13:32:45 -- common/autotest_common.sh@969 -- # kill 72446 00:04:48.799 13:32:45 -- common/autotest_common.sh@974 -- # wait 72446 00:04:50.703 13:32:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:50.703 13:32:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:50.703 13:32:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:50.703 13:32:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:50.703 13:32:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:50.703 13:32:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.703 13:32:47 -- common/autotest_common.sh@10 -- # set +x 00:04:50.703 13:32:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:50.703 13:32:47 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:50.703 13:32:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.703 13:32:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.703 13:32:47 -- common/autotest_common.sh@10 -- # set +x 00:04:50.703 ************************************ 00:04:50.703 START TEST env 00:04:50.703 ************************************ 00:04:50.703 13:32:47 env -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:50.703 * Looking for test storage... 00:04:50.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:50.703 13:32:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:50.703 13:32:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.703 13:32:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.703 13:32:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 ************************************ 00:04:50.962 START TEST env_memory 00:04:50.962 ************************************ 00:04:50.962 13:32:47 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:50.962 00:04:50.962 00:04:50.962 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.962 http://cunit.sourceforge.net/ 00:04:50.962 00:04:50.962 00:04:50.962 Suite: memory 00:04:50.962 Test: alloc and free memory map ...[2024-07-25 13:32:47.641832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:50.962 passed 00:04:50.962 Test: mem map translation ...[2024-07-25 13:32:47.660712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:50.962 [2024-07-25 13:32:47.660734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:50.962 [2024-07-25 13:32:47.660769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:50.962 [2024-07-25 13:32:47.660778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:50.962 passed 00:04:50.962 Test: mem map registration ...[2024-07-25 13:32:47.697557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:50.962 [2024-07-25 13:32:47.697573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:50.962 passed 00:04:50.962 Test: mem map adjacent registrations ...passed 00:04:50.962 00:04:50.962 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.962 suites 1 1 n/a 0 0 00:04:50.962 tests 4 4 4 0 0 00:04:50.962 asserts 152 152 152 0 n/a 00:04:50.962 00:04:50.962 Elapsed time = 0.128 seconds 00:04:50.962 00:04:50.962 real 0m0.138s 00:04:50.962 user 0m0.126s 00:04:50.963 sys 0m0.012s 00:04:50.963 13:32:47 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.963 13:32:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:50.963 ************************************ 00:04:50.963 END TEST env_memory 00:04:50.963 ************************************ 00:04:50.963 13:32:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:50.963 13:32:47 env -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.963 13:32:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.963 13:32:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.963 ************************************ 00:04:50.963 START TEST env_vtophys 00:04:50.963 ************************************ 00:04:50.963 13:32:47 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:50.963 EAL: lib.eal log level changed from notice to debug 00:04:50.963 EAL: Detected lcore 0 as core 0 on socket 0 00:04:50.963 EAL: Detected lcore 1 as core 1 on socket 0 00:04:50.963 EAL: Detected lcore 2 as core 2 on socket 0 00:04:50.963 EAL: Detected lcore 3 as core 3 on socket 0 00:04:50.963 EAL: Detected lcore 4 as core 4 on socket 0 00:04:50.963 EAL: Detected lcore 5 as core 5 on socket 0 00:04:50.963 EAL: Detected lcore 6 as core 6 on socket 0 00:04:50.963 EAL: Detected lcore 7 as core 8 on socket 0 00:04:50.963 EAL: Detected lcore 8 as core 9 on socket 0 00:04:50.963 EAL: Detected lcore 9 as core 10 on socket 0 00:04:50.963 EAL: Detected lcore 10 as core 11 on socket 0 00:04:50.963 EAL: Detected lcore 11 as core 12 on socket 0 00:04:50.963 EAL: Detected lcore 12 as core 13 on socket 0 00:04:50.963 EAL: Detected lcore 13 as core 14 on socket 0 00:04:50.963 EAL: Detected lcore 14 as core 16 on socket 0 00:04:50.963 EAL: Detected lcore 15 as core 17 on socket 0 00:04:50.963 EAL: Detected lcore 16 as core 18 on socket 0 00:04:50.963 EAL: Detected lcore 17 as core 19 on socket 0 00:04:50.963 EAL: Detected lcore 18 as core 20 on socket 0 00:04:50.963 EAL: Detected lcore 19 as core 21 on socket 0 00:04:50.963 EAL: Detected lcore 20 as core 22 on socket 0 00:04:50.963 EAL: Detected lcore 21 as core 24 on socket 0 00:04:50.963 EAL: Detected lcore 22 as core 25 on socket 0 00:04:50.963 EAL: Detected lcore 23 as core 26 on socket 0 00:04:50.963 EAL: Detected lcore 24 as core 27 on socket 0 00:04:50.963 EAL: Detected lcore 25 as core 28 on socket 0 00:04:50.963 EAL: Detected lcore 26 as core 29 on socket 0 00:04:50.963 EAL: Detected lcore 27 as core 30 on socket 0 00:04:50.963 EAL: Detected lcore 28 as core 0 on socket 1 00:04:50.963 EAL: Detected lcore 29 as core 1 on socket 1 00:04:50.963 EAL: Detected lcore 30 as core 2 on socket 1 00:04:50.963 EAL: Detected lcore 31 as core 3 on socket 1 00:04:50.963 EAL: Detected lcore 32 as core 4 on socket 1 00:04:50.963 EAL: Detected lcore 33 as core 5 on socket 1 00:04:50.963 EAL: Detected lcore 34 as core 6 on socket 1 00:04:50.963 EAL: Detected lcore 35 as core 8 on socket 1 00:04:50.963 EAL: Detected lcore 36 as core 9 on socket 1 00:04:50.963 EAL: Detected lcore 37 as core 10 on socket 1 00:04:50.963 EAL: Detected lcore 38 as core 11 on socket 1 00:04:50.963 EAL: Detected lcore 39 as core 12 on socket 1 00:04:50.963 EAL: Detected lcore 40 as core 13 on socket 1 00:04:50.963 EAL: Detected lcore 41 as core 14 on socket 1 00:04:50.963 EAL: Detected lcore 42 as core 16 on socket 1 00:04:50.963 EAL: Detected lcore 43 as core 17 on socket 1 00:04:50.963 EAL: Detected lcore 44 as core 18 on socket 1 00:04:50.963 EAL: Detected lcore 45 as core 19 on socket 1 00:04:50.963 EAL: Detected lcore 46 as core 20 on socket 1 00:04:50.963 EAL: Detected lcore 47 as core 21 on socket 1 00:04:50.963 EAL: Detected lcore 48 as core 22 on socket 1 00:04:50.963 EAL: Detected lcore 49 as core 24 on socket 1 00:04:50.963 EAL: Detected lcore 50 as core 25 on socket 1 00:04:50.963 EAL: 
Detected lcore 51 as core 26 on socket 1 00:04:50.963 EAL: Detected lcore 52 as core 27 on socket 1 00:04:50.963 EAL: Detected lcore 53 as core 28 on socket 1 00:04:50.963 EAL: Detected lcore 54 as core 29 on socket 1 00:04:50.963 EAL: Detected lcore 55 as core 30 on socket 1 00:04:50.963 EAL: Detected lcore 56 as core 0 on socket 0 00:04:50.963 EAL: Detected lcore 57 as core 1 on socket 0 00:04:50.963 EAL: Detected lcore 58 as core 2 on socket 0 00:04:50.963 EAL: Detected lcore 59 as core 3 on socket 0 00:04:50.963 EAL: Detected lcore 60 as core 4 on socket 0 00:04:50.963 EAL: Detected lcore 61 as core 5 on socket 0 00:04:50.963 EAL: Detected lcore 62 as core 6 on socket 0 00:04:50.963 EAL: Detected lcore 63 as core 8 on socket 0 00:04:50.963 EAL: Detected lcore 64 as core 9 on socket 0 00:04:50.963 EAL: Detected lcore 65 as core 10 on socket 0 00:04:50.963 EAL: Detected lcore 66 as core 11 on socket 0 00:04:50.963 EAL: Detected lcore 67 as core 12 on socket 0 00:04:50.963 EAL: Detected lcore 68 as core 13 on socket 0 00:04:50.963 EAL: Detected lcore 69 as core 14 on socket 0 00:04:50.963 EAL: Detected lcore 70 as core 16 on socket 0 00:04:50.963 EAL: Detected lcore 71 as core 17 on socket 0 00:04:50.963 EAL: Detected lcore 72 as core 18 on socket 0 00:04:50.963 EAL: Detected lcore 73 as core 19 on socket 0 00:04:50.963 EAL: Detected lcore 74 as core 20 on socket 0 00:04:50.963 EAL: Detected lcore 75 as core 21 on socket 0 00:04:50.963 EAL: Detected lcore 76 as core 22 on socket 0 00:04:50.963 EAL: Detected lcore 77 as core 24 on socket 0 00:04:50.963 EAL: Detected lcore 78 as core 25 on socket 0 00:04:50.963 EAL: Detected lcore 79 as core 26 on socket 0 00:04:50.963 EAL: Detected lcore 80 as core 27 on socket 0 00:04:50.963 EAL: Detected lcore 81 as core 28 on socket 0 00:04:50.963 EAL: Detected lcore 82 as core 29 on socket 0 00:04:50.963 EAL: Detected lcore 83 as core 30 on socket 0 00:04:50.963 EAL: Detected lcore 84 as core 0 on socket 1 00:04:50.963 EAL: Detected lcore 85 as core 1 on socket 1 00:04:50.963 EAL: Detected lcore 86 as core 2 on socket 1 00:04:50.963 EAL: Detected lcore 87 as core 3 on socket 1 00:04:50.963 EAL: Detected lcore 88 as core 4 on socket 1 00:04:50.963 EAL: Detected lcore 89 as core 5 on socket 1 00:04:50.963 EAL: Detected lcore 90 as core 6 on socket 1 00:04:50.963 EAL: Detected lcore 91 as core 8 on socket 1 00:04:50.963 EAL: Detected lcore 92 as core 9 on socket 1 00:04:50.963 EAL: Detected lcore 93 as core 10 on socket 1 00:04:50.963 EAL: Detected lcore 94 as core 11 on socket 1 00:04:50.963 EAL: Detected lcore 95 as core 12 on socket 1 00:04:50.963 EAL: Detected lcore 96 as core 13 on socket 1 00:04:50.963 EAL: Detected lcore 97 as core 14 on socket 1 00:04:50.963 EAL: Detected lcore 98 as core 16 on socket 1 00:04:50.963 EAL: Detected lcore 99 as core 17 on socket 1 00:04:50.963 EAL: Detected lcore 100 as core 18 on socket 1 00:04:50.963 EAL: Detected lcore 101 as core 19 on socket 1 00:04:50.963 EAL: Detected lcore 102 as core 20 on socket 1 00:04:50.963 EAL: Detected lcore 103 as core 21 on socket 1 00:04:50.963 EAL: Detected lcore 104 as core 22 on socket 1 00:04:50.963 EAL: Detected lcore 105 as core 24 on socket 1 00:04:50.963 EAL: Detected lcore 106 as core 25 on socket 1 00:04:50.963 EAL: Detected lcore 107 as core 26 on socket 1 00:04:50.963 EAL: Detected lcore 108 as core 27 on socket 1 00:04:50.963 EAL: Detected lcore 109 as core 28 on socket 1 00:04:50.963 EAL: Detected lcore 110 as core 29 on socket 1 00:04:50.963 EAL: Detected lcore 111 as 
core 30 on socket 1 00:04:50.963 EAL: Maximum logical cores by configuration: 128 00:04:50.963 EAL: Detected CPU lcores: 112 00:04:50.963 EAL: Detected NUMA nodes: 2 00:04:50.963 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:50.963 EAL: Detected shared linkage of DPDK 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:50.963 EAL: Registered [vdev] bus. 00:04:50.963 EAL: bus.vdev log level changed from disabled to notice 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:50.963 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:50.963 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:50.963 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:50.963 EAL: No shared files mode enabled, IPC will be disabled 00:04:51.223 EAL: No shared files mode enabled, IPC is disabled 00:04:51.223 EAL: Bus pci wants IOVA as 'DC' 00:04:51.223 EAL: Bus vdev wants IOVA as 'DC' 00:04:51.223 EAL: Buses did not request a specific IOVA mode. 00:04:51.223 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:51.223 EAL: Selected IOVA mode 'VA' 00:04:51.223 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.223 EAL: Probing VFIO support... 00:04:51.223 EAL: IOMMU type 1 (Type 1) is supported 00:04:51.223 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:51.223 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:51.223 EAL: VFIO support initialized 00:04:51.223 EAL: Ask a virtual area of 0x2e000 bytes 00:04:51.223 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:51.223 EAL: Setting up physically contiguous memory... 
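The IOVA decision above ("IOMMU is available, selecting IOVA as VA mode") hinges on two things that can be verified from the shell: the kernel exposing IOMMU groups, and the NVMe device sitting on vfio-pci. A quick host-side check, assuming the standard sysfs layout and the BDF from this run:

# Why EAL could pick IOVA mode 'VA': IOMMU groups exist and the device is on vfio-pci.
ls /sys/kernel/iommu_groups | head -3                # non-empty => IOMMU active
readlink /sys/bus/pci/devices/0000:d8:00.0/driver    # should end in .../vfio-pci
dmesg | grep -iE 'DMAR|IOMMU' | head -3              # kernel IOMMU init messages

Without a usable IOMMU, EAL would have to fall back to IOVA 'PA', which requires physical addresses and typically root plus no-IOMMU vfio.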
00:04:51.223 EAL: Setting maximum number of open files to 524288 00:04:51.223 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:51.223 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:51.223 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:51.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.223 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:51.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.223 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:51.223 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:51.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.223 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:51.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.223 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:51.223 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:51.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.223 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:51.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.223 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:51.223 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:51.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.223 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:51.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.223 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:51.223 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:51.223 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:51.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.223 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:51.223 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.223 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:51.223 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:51.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.224 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:51.224 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.224 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:51.224 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:51.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.224 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:51.224 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.224 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:51.224 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:51.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.224 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:51.224 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.224 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:51.224 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:51.224 EAL: Hugepages will be freed exactly as allocated. 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: TSC frequency is ~2500000 KHz 00:04:51.224 EAL: Main lcore 0 is ready (tid=7f2c0cfe9a00;cpuset=[0]) 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 0 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 2MB 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Mem event callback 'spdk:(nil)' registered 00:04:51.224 00:04:51.224 00:04:51.224 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.224 http://cunit.sourceforge.net/ 00:04:51.224 00:04:51.224 00:04:51.224 Suite: components_suite 00:04:51.224 Test: vtophys_malloc_test ...passed 00:04:51.224 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 4MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 4MB 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 6MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 6MB 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 10MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 10MB 00:04:51.224 EAL: Trying to obtain current memory policy. 
00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 18MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 18MB 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 34MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 34MB 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 66MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 66MB 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 130MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was shrunk by 130MB 00:04:51.224 EAL: Trying to obtain current memory policy. 00:04:51.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.224 EAL: Restoring previous memory policy: 4 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.224 EAL: request: mp_malloc_sync 00:04:51.224 EAL: No shared files mode enabled, IPC is disabled 00:04:51.224 EAL: Heap on socket 0 was expanded by 258MB 00:04:51.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.483 EAL: request: mp_malloc_sync 00:04:51.483 EAL: No shared files mode enabled, IPC is disabled 00:04:51.483 EAL: Heap on socket 0 was shrunk by 258MB 00:04:51.483 EAL: Trying to obtain current memory policy. 
00:04:51.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.483 EAL: Restoring previous memory policy: 4 00:04:51.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.483 EAL: request: mp_malloc_sync 00:04:51.483 EAL: No shared files mode enabled, IPC is disabled 00:04:51.483 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.742 EAL: request: mp_malloc_sync 00:04:51.742 EAL: No shared files mode enabled, IPC is disabled 00:04:51.742 EAL: Heap on socket 0 was shrunk by 514MB 00:04:51.742 EAL: Trying to obtain current memory policy. 00:04:51.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.742 EAL: Restoring previous memory policy: 4 00:04:51.742 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.742 EAL: request: mp_malloc_sync 00:04:51.742 EAL: No shared files mode enabled, IPC is disabled 00:04:51.742 EAL: Heap on socket 0 was expanded by 1026MB 00:04:52.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.001 EAL: request: mp_malloc_sync 00:04:52.001 EAL: No shared files mode enabled, IPC is disabled 00:04:52.001 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:52.001 passed 00:04:52.001 00:04:52.001 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.001 suites 1 1 n/a 0 0 00:04:52.001 tests 2 2 2 0 0 00:04:52.001 asserts 497 497 497 0 n/a 00:04:52.001 00:04:52.001 Elapsed time = 0.955 seconds 00:04:52.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.001 EAL: request: mp_malloc_sync 00:04:52.001 EAL: No shared files mode enabled, IPC is disabled 00:04:52.001 EAL: Heap on socket 0 was shrunk by 2MB 00:04:52.001 EAL: No shared files mode enabled, IPC is disabled 00:04:52.001 EAL: No shared files mode enabled, IPC is disabled 00:04:52.001 EAL: No shared files mode enabled, IPC is disabled 00:04:52.001 00:04:52.001 real 0m1.071s 00:04:52.001 user 0m0.628s 00:04:52.001 sys 0m0.419s 00:04:52.001 13:32:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.001 13:32:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:52.001 ************************************ 00:04:52.001 END TEST env_vtophys 00:04:52.001 ************************************ 00:04:52.260 13:32:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.260 13:32:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.260 13:32:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.260 13:32:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.260 ************************************ 00:04:52.260 START TEST env_pci 00:04:52.261 ************************************ 00:04:52.261 13:32:48 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.261 00:04:52.261 00:04:52.261 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.261 http://cunit.sourceforge.net/ 00:04:52.261 00:04:52.261 00:04:52.261 Suite: pci 00:04:52.261 Test: pci_hook ...[2024-07-25 13:32:48.963965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73854 has claimed it 00:04:52.261 EAL: Cannot find device (10000:00:01.0) 00:04:52.261 EAL: Failed to attach device on primary process 00:04:52.261 passed 00:04:52.261 00:04:52.261 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.261 
suites 1 1 n/a 0 0 00:04:52.261 tests 1 1 1 0 0 00:04:52.261 asserts 25 25 25 0 n/a 00:04:52.261 00:04:52.261 Elapsed time = 0.029 seconds 00:04:52.261 00:04:52.261 real 0m0.042s 00:04:52.261 user 0m0.005s 00:04:52.261 sys 0m0.036s 00:04:52.261 13:32:48 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.261 13:32:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:52.261 ************************************ 00:04:52.261 END TEST env_pci 00:04:52.261 ************************************ 00:04:52.261 13:32:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:52.261 13:32:49 env -- env/env.sh@15 -- # uname 00:04:52.261 13:32:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:52.261 13:32:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:52.261 13:32:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.261 13:32:49 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:52.261 13:32:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.261 13:32:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.261 ************************************ 00:04:52.261 START TEST env_dpdk_post_init 00:04:52.261 ************************************ 00:04:52.261 13:32:49 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.261 EAL: Detected CPU lcores: 112 00:04:52.261 EAL: Detected NUMA nodes: 2 00:04:52.261 EAL: Detected shared linkage of DPDK 00:04:52.261 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.261 EAL: Selected IOVA mode 'VA' 00:04:52.261 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.261 EAL: VFIO support initialized 00:04:52.520 EAL: Using IOMMU type 1 (Type 1) 00:04:57.817 Starting DPDK initialization... 00:04:57.817 Starting SPDK post initialization... 00:04:57.817 SPDK NVMe probe 00:04:57.817 Attaching to 0000:d8:00.0 00:04:57.817 Attached to 0000:d8:00.0 00:04:57.817 Cleaning up... 
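The probe sequence just above only attaches because 0000:d8:00.0 was moved from the kernel nvme driver to vfio-pci earlier in the run (the "nvme -> vfio-pci" lines). Done by hand, the rebind scripts/setup.sh performs reduces to roughly the following sketch, with the vendor/device pair 8086 0a54 taken from this log; the real script handles hugepages and many devices at once:

# Hand-rolled version of the driver rebind setup.sh did before the probe.
bdf=0000:d8:00.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # drop kernel nvme
echo "8086 0a54" > /sys/bus/pci/drivers/vfio-pci/new_id       # vfio-pci claims the ID
echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/bind 2>/dev/null || true  # in case new_id didn't bind

setup.sh reset is the inverse, which is why the earlier "Waiting for block devices" section shows vfio-pci -> nvme right before the namespace revert.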
00:04:57.817 00:04:57.817 real 0m4.956s 00:04:57.817 user 0m3.668s 00:04:57.817 sys 0m0.348s 00:04:57.817 13:32:54 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.817 13:32:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 ************************************ 00:04:57.817 END TEST env_dpdk_post_init 00:04:57.817 ************************************ 00:04:57.817 13:32:54 env -- env/env.sh@26 -- # uname 00:04:57.817 13:32:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:57.817 13:32:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.817 13:32:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.817 13:32:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.817 13:32:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 ************************************ 00:04:57.817 START TEST env_mem_callbacks 00:04:57.817 ************************************ 00:04:57.817 13:32:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.817 EAL: Detected CPU lcores: 112 00:04:57.817 EAL: Detected NUMA nodes: 2 00:04:57.817 EAL: Detected shared linkage of DPDK 00:04:57.817 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.817 EAL: Selected IOVA mode 'VA' 00:04:57.817 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.817 EAL: VFIO support initialized 00:04:57.817 00:04:57.817 00:04:57.817 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.817 http://cunit.sourceforge.net/ 00:04:57.817 00:04:57.817 00:04:57.817 Suite: memory 00:04:57.817 Test: test ... 
00:04:57.817 register 0x200000200000 2097152 00:04:57.817 malloc 3145728 00:04:57.817 register 0x200000400000 4194304 00:04:57.817 buf 0x200000500000 len 3145728 PASSED 00:04:57.817 malloc 64 00:04:57.817 buf 0x2000004fff40 len 64 PASSED 00:04:57.817 malloc 4194304 00:04:57.817 register 0x200000800000 6291456 00:04:57.817 buf 0x200000a00000 len 4194304 PASSED 00:04:57.817 free 0x200000500000 3145728 00:04:57.817 free 0x2000004fff40 64 00:04:57.817 unregister 0x200000400000 4194304 PASSED 00:04:57.817 free 0x200000a00000 4194304 00:04:57.817 unregister 0x200000800000 6291456 PASSED 00:04:57.817 malloc 8388608 00:04:57.817 register 0x200000400000 10485760 00:04:57.817 buf 0x200000600000 len 8388608 PASSED 00:04:57.817 free 0x200000600000 8388608 00:04:57.817 unregister 0x200000400000 10485760 PASSED 00:04:57.817 passed 00:04:57.817 00:04:57.817 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.817 suites 1 1 n/a 0 0 00:04:57.817 tests 1 1 1 0 0 00:04:57.817 asserts 15 15 15 0 n/a 00:04:57.817 00:04:57.817 Elapsed time = 0.006 seconds 00:04:57.817 00:04:57.817 real 0m0.068s 00:04:57.817 user 0m0.024s 00:04:57.817 sys 0m0.044s 00:04:57.817 13:32:54 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.817 13:32:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 ************************************ 00:04:57.817 END TEST env_mem_callbacks 00:04:57.817 ************************************ 00:04:57.817 00:04:57.817 real 0m6.758s 00:04:57.817 user 0m4.623s 00:04:57.817 sys 0m1.194s 00:04:57.817 13:32:54 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.817 13:32:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 ************************************ 00:04:57.817 END TEST env 00:04:57.817 ************************************ 00:04:57.817 13:32:54 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:57.817 13:32:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.817 13:32:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.817 13:32:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 ************************************ 00:04:57.817 START TEST rpc 00:04:57.817 ************************************ 00:04:57.817 13:32:54 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:57.817 * Looking for test storage... 00:04:57.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.817 13:32:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74907 00:04:57.817 13:32:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.817 13:32:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:57.817 13:32:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74907 00:04:57.817 13:32:54 rpc -- common/autotest_common.sh@831 -- # '[' -z 74907 ']' 00:04:57.818 13:32:54 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.818 13:32:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.818 13:32:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
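The register/unregister pairs in the mem_callbacks trace are the same mem-event machinery the vtophys suite exercised with its expand/shrink cycles: a large malloc that grows the DPDK heap fires a register callback for the new region, the matching free fires an unregister, while the 64-byte malloc is served from an already-registered region and fires nothing. Every register should eventually be paired; a throwaway check over a captured trace (callbacks.log here is a hypothetical capture of the output above, with the runner's timestamp kept as the first field):

# Pair up register/unregister events from a saved mem_callbacks trace.
# $2 is the event keyword, $3 the region address, $4 its length.
awk '$2 == "register"   { reg[$3] = $4 }
     $2 == "unregister" { delete reg[$3] }
     END { for (a in reg) print "unpaired register:", a, reg[a] }' callbacks.log

Empty output means every registered region was unregistered, which is what the PASSED lines assert.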
00:04:57.818 13:32:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.818 13:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.818 [2024-07-25 13:32:54.474137] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:57.818 [2024-07-25 13:32:54.474191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74907 ] 00:04:57.818 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.818 [2024-07-25 13:32:54.509734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:57.818 [2024-07-25 13:32:54.546064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.818 [2024-07-25 13:32:54.583930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.818 [2024-07-25 13:32:54.583972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74907' to capture a snapshot of events at runtime. 00:04:57.818 [2024-07-25 13:32:54.583981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.818 [2024-07-25 13:32:54.583990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.818 [2024-07-25 13:32:54.583997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74907 for offline analysis/debug. 00:04:57.818 [2024-07-25 13:32:54.584022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.385 13:32:55 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.385 13:32:55 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:58.385 13:32:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.385 13:32:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.385 13:32:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.385 13:32:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.385 13:32:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.385 13:32:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.385 13:32:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 ************************************ 00:04:58.644 START TEST rpc_integrity 00:04:58.644 ************************************ 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.644 { 00:04:58.644 "name": "Malloc0", 00:04:58.644 "aliases": [ 00:04:58.644 "3545e7ad-0d30-499a-979a-4c9d72477d15" 00:04:58.644 ], 00:04:58.644 "product_name": "Malloc disk", 00:04:58.644 "block_size": 512, 00:04:58.644 "num_blocks": 16384, 00:04:58.644 "uuid": "3545e7ad-0d30-499a-979a-4c9d72477d15", 00:04:58.644 "assigned_rate_limits": { 00:04:58.644 "rw_ios_per_sec": 0, 00:04:58.644 "rw_mbytes_per_sec": 0, 00:04:58.644 "r_mbytes_per_sec": 0, 00:04:58.644 "w_mbytes_per_sec": 0 00:04:58.644 }, 00:04:58.644 "claimed": false, 00:04:58.644 "zoned": false, 00:04:58.644 "supported_io_types": { 00:04:58.644 "read": true, 00:04:58.644 "write": true, 00:04:58.644 "unmap": true, 00:04:58.644 "flush": true, 00:04:58.644 "reset": true, 00:04:58.644 "nvme_admin": false, 00:04:58.644 "nvme_io": false, 00:04:58.644 "nvme_io_md": false, 00:04:58.644 "write_zeroes": true, 00:04:58.644 "zcopy": true, 00:04:58.644 "get_zone_info": false, 00:04:58.644 "zone_management": false, 00:04:58.644 "zone_append": false, 00:04:58.644 "compare": false, 00:04:58.644 "compare_and_write": false, 00:04:58.644 "abort": true, 00:04:58.644 "seek_hole": false, 00:04:58.644 "seek_data": false, 00:04:58.644 "copy": true, 00:04:58.644 "nvme_iov_md": false 00:04:58.644 }, 00:04:58.644 "memory_domains": [ 00:04:58.644 { 00:04:58.644 "dma_device_id": "system", 00:04:58.644 "dma_device_type": 1 00:04:58.644 }, 00:04:58.644 { 00:04:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.644 "dma_device_type": 2 00:04:58.644 } 00:04:58.644 ], 00:04:58.644 "driver_specific": {} 00:04:58.644 } 00:04:58.644 ]' 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 [2024-07-25 13:32:55.434915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.644 [2024-07-25 13:32:55.434946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.644 [2024-07-25 13:32:55.434959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a7eb0 00:04:58.644 [2024-07-25 13:32:55.434967] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.644 [2024-07-25 13:32:55.436029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.644 [2024-07-25 13:32:55.436053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.644 Passthru0 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.644 { 00:04:58.644 "name": "Malloc0", 00:04:58.644 "aliases": [ 00:04:58.644 "3545e7ad-0d30-499a-979a-4c9d72477d15" 00:04:58.644 ], 00:04:58.644 "product_name": "Malloc disk", 00:04:58.644 "block_size": 512, 00:04:58.644 "num_blocks": 16384, 00:04:58.644 "uuid": "3545e7ad-0d30-499a-979a-4c9d72477d15", 00:04:58.644 "assigned_rate_limits": { 00:04:58.644 "rw_ios_per_sec": 0, 00:04:58.644 "rw_mbytes_per_sec": 0, 00:04:58.644 "r_mbytes_per_sec": 0, 00:04:58.644 "w_mbytes_per_sec": 0 00:04:58.644 }, 00:04:58.644 "claimed": true, 00:04:58.644 "claim_type": "exclusive_write", 00:04:58.644 "zoned": false, 00:04:58.644 "supported_io_types": { 00:04:58.644 "read": true, 00:04:58.644 "write": true, 00:04:58.644 "unmap": true, 00:04:58.644 "flush": true, 00:04:58.644 "reset": true, 00:04:58.644 "nvme_admin": false, 00:04:58.644 "nvme_io": false, 00:04:58.644 "nvme_io_md": false, 00:04:58.644 "write_zeroes": true, 00:04:58.644 "zcopy": true, 00:04:58.644 "get_zone_info": false, 00:04:58.644 "zone_management": false, 00:04:58.644 "zone_append": false, 00:04:58.644 "compare": false, 00:04:58.644 "compare_and_write": false, 00:04:58.644 "abort": true, 00:04:58.644 "seek_hole": false, 00:04:58.644 "seek_data": false, 00:04:58.644 "copy": true, 00:04:58.644 "nvme_iov_md": false 00:04:58.644 }, 00:04:58.644 "memory_domains": [ 00:04:58.644 { 00:04:58.644 "dma_device_id": "system", 00:04:58.644 "dma_device_type": 1 00:04:58.644 }, 00:04:58.644 { 00:04:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.644 "dma_device_type": 2 00:04:58.644 } 00:04:58.644 ], 00:04:58.644 "driver_specific": {} 00:04:58.644 }, 00:04:58.644 { 00:04:58.644 "name": "Passthru0", 00:04:58.644 "aliases": [ 00:04:58.644 "463494d0-7f34-5ebd-b0f4-32eb6b011a97" 00:04:58.644 ], 00:04:58.644 "product_name": "passthru", 00:04:58.644 "block_size": 512, 00:04:58.644 "num_blocks": 16384, 00:04:58.645 "uuid": "463494d0-7f34-5ebd-b0f4-32eb6b011a97", 00:04:58.645 "assigned_rate_limits": { 00:04:58.645 "rw_ios_per_sec": 0, 00:04:58.645 "rw_mbytes_per_sec": 0, 00:04:58.645 "r_mbytes_per_sec": 0, 00:04:58.645 "w_mbytes_per_sec": 0 00:04:58.645 }, 00:04:58.645 "claimed": false, 00:04:58.645 "zoned": false, 00:04:58.645 "supported_io_types": { 00:04:58.645 "read": true, 00:04:58.645 "write": true, 00:04:58.645 "unmap": true, 00:04:58.645 "flush": true, 00:04:58.645 "reset": true, 00:04:58.645 "nvme_admin": false, 00:04:58.645 "nvme_io": false, 00:04:58.645 "nvme_io_md": false, 00:04:58.645 "write_zeroes": true, 00:04:58.645 "zcopy": true, 00:04:58.645 "get_zone_info": false, 00:04:58.645 "zone_management": false, 00:04:58.645 "zone_append": false, 00:04:58.645 "compare": false, 00:04:58.645 
"compare_and_write": false, 00:04:58.645 "abort": true, 00:04:58.645 "seek_hole": false, 00:04:58.645 "seek_data": false, 00:04:58.645 "copy": true, 00:04:58.645 "nvme_iov_md": false 00:04:58.645 }, 00:04:58.645 "memory_domains": [ 00:04:58.645 { 00:04:58.645 "dma_device_id": "system", 00:04:58.645 "dma_device_type": 1 00:04:58.645 }, 00:04:58.645 { 00:04:58.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.645 "dma_device_type": 2 00:04:58.645 } 00:04:58.645 ], 00:04:58.645 "driver_specific": { 00:04:58.645 "passthru": { 00:04:58.645 "name": "Passthru0", 00:04:58.645 "base_bdev_name": "Malloc0" 00:04:58.645 } 00:04:58.645 } 00:04:58.645 } 00:04:58.645 ]' 00:04:58.645 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.645 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.645 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.645 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.645 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.645 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.645 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.904 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.904 13:32:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.904 00:04:58.904 real 0m0.275s 00:04:58.904 user 0m0.167s 00:04:58.904 sys 0m0.042s 00:04:58.904 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.904 13:32:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 ************************************ 00:04:58.904 END TEST rpc_integrity 00:04:58.904 ************************************ 00:04:58.904 13:32:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.904 13:32:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.904 13:32:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.904 13:32:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 ************************************ 00:04:58.904 START TEST rpc_plugins 00:04:58.904 ************************************ 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.904 13:32:55 rpc.rpc_plugins -- 
rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.904 { 00:04:58.904 "name": "Malloc1", 00:04:58.904 "aliases": [ 00:04:58.904 "bb137f5f-23f2-46b8-b380-5073e5448175" 00:04:58.904 ], 00:04:58.904 "product_name": "Malloc disk", 00:04:58.904 "block_size": 4096, 00:04:58.904 "num_blocks": 256, 00:04:58.904 "uuid": "bb137f5f-23f2-46b8-b380-5073e5448175", 00:04:58.904 "assigned_rate_limits": { 00:04:58.904 "rw_ios_per_sec": 0, 00:04:58.904 "rw_mbytes_per_sec": 0, 00:04:58.904 "r_mbytes_per_sec": 0, 00:04:58.904 "w_mbytes_per_sec": 0 00:04:58.904 }, 00:04:58.904 "claimed": false, 00:04:58.904 "zoned": false, 00:04:58.904 "supported_io_types": { 00:04:58.904 "read": true, 00:04:58.904 "write": true, 00:04:58.904 "unmap": true, 00:04:58.904 "flush": true, 00:04:58.904 "reset": true, 00:04:58.904 "nvme_admin": false, 00:04:58.904 "nvme_io": false, 00:04:58.904 "nvme_io_md": false, 00:04:58.904 "write_zeroes": true, 00:04:58.904 "zcopy": true, 00:04:58.904 "get_zone_info": false, 00:04:58.904 "zone_management": false, 00:04:58.904 "zone_append": false, 00:04:58.904 "compare": false, 00:04:58.904 "compare_and_write": false, 00:04:58.904 "abort": true, 00:04:58.904 "seek_hole": false, 00:04:58.904 "seek_data": false, 00:04:58.904 "copy": true, 00:04:58.904 "nvme_iov_md": false 00:04:58.904 }, 00:04:58.904 "memory_domains": [ 00:04:58.904 { 00:04:58.904 "dma_device_id": "system", 00:04:58.904 "dma_device_type": 1 00:04:58.904 }, 00:04:58.904 { 00:04:58.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.904 "dma_device_type": 2 00:04:58.904 } 00:04:58.904 ], 00:04:58.904 "driver_specific": {} 00:04:58.904 } 00:04:58.904 ]' 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.904 13:32:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.904 00:04:58.904 real 0m0.132s 00:04:58.904 user 0m0.081s 00:04:58.904 sys 0m0.017s 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.904 13:32:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 ************************************ 00:04:58.904 END TEST rpc_plugins 00:04:58.904 ************************************ 00:04:59.163 13:32:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
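The rpc_integrity and rpc_plugins runs above both drive a create → inspect → delete cycle over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py. A minimal manual replay of the same cycle, assuming the default RPC socket (the PYTHONPATH line reflects where this suite keeps rpc_plugin.py and is an assumption of this sketch, not taken from the log):

  ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB at 512 B blocks -> Malloc0 (16384 blocks, as dumped above)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 (claim_type exclusive_write)
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2 while both bdevs exist
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  export PYTHONPATH=$PYTHONPATH:./test/rpc                       # assumed: directory holding rpc_plugin.py
  ./scripts/rpc.py --plugin rpc_plugin create_malloc             # plugin-provided method -> Malloc1
  ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1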
00:04:59.163 13:32:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.163 13:32:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.163 13:32:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.163 ************************************ 00:04:59.163 START TEST rpc_trace_cmd_test 00:04:59.163 ************************************ 00:04:59.163 13:32:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.164 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74907", 00:04:59.164 "tpoint_group_mask": "0x8", 00:04:59.164 "iscsi_conn": { 00:04:59.164 "mask": "0x2", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "scsi": { 00:04:59.164 "mask": "0x4", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "bdev": { 00:04:59.164 "mask": "0x8", 00:04:59.164 "tpoint_mask": "0xffffffffffffffff" 00:04:59.164 }, 00:04:59.164 "nvmf_rdma": { 00:04:59.164 "mask": "0x10", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "nvmf_tcp": { 00:04:59.164 "mask": "0x20", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "ftl": { 00:04:59.164 "mask": "0x40", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "blobfs": { 00:04:59.164 "mask": "0x80", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "dsa": { 00:04:59.164 "mask": "0x200", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "thread": { 00:04:59.164 "mask": "0x400", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "nvme_pcie": { 00:04:59.164 "mask": "0x800", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "iaa": { 00:04:59.164 "mask": "0x1000", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "nvme_tcp": { 00:04:59.164 "mask": "0x2000", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "bdev_nvme": { 00:04:59.164 "mask": "0x4000", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 }, 00:04:59.164 "sock": { 00:04:59.164 "mask": "0x8000", 00:04:59.164 "tpoint_mask": "0x0" 00:04:59.164 } 00:04:59.164 }' 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.164 13:32:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.164 13:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.164 13:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.164 13:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.164 00:04:59.164 real 0m0.183s 00:04:59.164 
user 0m0.147s 00:04:59.164 sys 0m0.027s 00:04:59.164 13:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.164 13:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.164 ************************************ 00:04:59.164 END TEST rpc_trace_cmd_test 00:04:59.164 ************************************ 00:04:59.423 13:32:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.423 13:32:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.423 13:32:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.423 13:32:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.423 13:32:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.423 13:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 ************************************ 00:04:59.423 START TEST rpc_daemon_integrity 00:04:59.423 ************************************ 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.423 { 00:04:59.423 "name": "Malloc2", 00:04:59.423 "aliases": [ 00:04:59.423 "6e1ca1b1-7ec7-49ca-8e35-8019021e7eac" 00:04:59.423 ], 00:04:59.423 "product_name": "Malloc disk", 00:04:59.423 "block_size": 512, 00:04:59.423 "num_blocks": 16384, 00:04:59.423 "uuid": "6e1ca1b1-7ec7-49ca-8e35-8019021e7eac", 00:04:59.423 "assigned_rate_limits": { 00:04:59.423 "rw_ios_per_sec": 0, 00:04:59.423 "rw_mbytes_per_sec": 0, 00:04:59.423 "r_mbytes_per_sec": 0, 00:04:59.423 "w_mbytes_per_sec": 0 00:04:59.423 }, 00:04:59.423 "claimed": false, 00:04:59.423 "zoned": false, 00:04:59.423 "supported_io_types": { 00:04:59.423 "read": true, 00:04:59.423 "write": true, 00:04:59.423 "unmap": true, 00:04:59.423 "flush": true, 00:04:59.423 "reset": true, 00:04:59.423 "nvme_admin": false, 00:04:59.423 "nvme_io": false, 00:04:59.423 "nvme_io_md": false, 00:04:59.423 "write_zeroes": true, 00:04:59.423 "zcopy": true, 00:04:59.423 "get_zone_info": 
false, 00:04:59.423 "zone_management": false, 00:04:59.423 "zone_append": false, 00:04:59.423 "compare": false, 00:04:59.423 "compare_and_write": false, 00:04:59.423 "abort": true, 00:04:59.423 "seek_hole": false, 00:04:59.423 "seek_data": false, 00:04:59.423 "copy": true, 00:04:59.423 "nvme_iov_md": false 00:04:59.423 }, 00:04:59.423 "memory_domains": [ 00:04:59.423 { 00:04:59.423 "dma_device_id": "system", 00:04:59.423 "dma_device_type": 1 00:04:59.423 }, 00:04:59.423 { 00:04:59.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.423 "dma_device_type": 2 00:04:59.423 } 00:04:59.423 ], 00:04:59.423 "driver_specific": {} 00:04:59.423 } 00:04:59.423 ]' 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 [2024-07-25 13:32:56.249094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.423 [2024-07-25 13:32:56.249121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.423 [2024-07-25 13:32:56.249135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x129f8a0 00:04:59.423 [2024-07-25 13:32:56.249143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.423 [2024-07-25 13:32:56.250037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.423 [2024-07-25 13:32:56.250059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.423 Passthru0 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.423 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.423 { 00:04:59.423 "name": "Malloc2", 00:04:59.423 "aliases": [ 00:04:59.423 "6e1ca1b1-7ec7-49ca-8e35-8019021e7eac" 00:04:59.424 ], 00:04:59.424 "product_name": "Malloc disk", 00:04:59.424 "block_size": 512, 00:04:59.424 "num_blocks": 16384, 00:04:59.424 "uuid": "6e1ca1b1-7ec7-49ca-8e35-8019021e7eac", 00:04:59.424 "assigned_rate_limits": { 00:04:59.424 "rw_ios_per_sec": 0, 00:04:59.424 "rw_mbytes_per_sec": 0, 00:04:59.424 "r_mbytes_per_sec": 0, 00:04:59.424 "w_mbytes_per_sec": 0 00:04:59.424 }, 00:04:59.424 "claimed": true, 00:04:59.424 "claim_type": "exclusive_write", 00:04:59.424 "zoned": false, 00:04:59.424 "supported_io_types": { 00:04:59.424 "read": true, 00:04:59.424 "write": true, 00:04:59.424 "unmap": true, 00:04:59.424 "flush": true, 00:04:59.424 "reset": true, 00:04:59.424 "nvme_admin": false, 00:04:59.424 "nvme_io": false, 00:04:59.424 "nvme_io_md": false, 00:04:59.424 "write_zeroes": true, 00:04:59.424 "zcopy": true, 00:04:59.424 "get_zone_info": false, 00:04:59.424 "zone_management": false, 00:04:59.424 "zone_append": false, 00:04:59.424 "compare": false, 00:04:59.424 
"compare_and_write": false, 00:04:59.424 "abort": true, 00:04:59.424 "seek_hole": false, 00:04:59.424 "seek_data": false, 00:04:59.424 "copy": true, 00:04:59.424 "nvme_iov_md": false 00:04:59.424 }, 00:04:59.424 "memory_domains": [ 00:04:59.424 { 00:04:59.424 "dma_device_id": "system", 00:04:59.424 "dma_device_type": 1 00:04:59.424 }, 00:04:59.424 { 00:04:59.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.424 "dma_device_type": 2 00:04:59.424 } 00:04:59.424 ], 00:04:59.424 "driver_specific": {} 00:04:59.424 }, 00:04:59.424 { 00:04:59.424 "name": "Passthru0", 00:04:59.424 "aliases": [ 00:04:59.424 "fea15bb4-1970-5f15-b468-010f780bab92" 00:04:59.424 ], 00:04:59.424 "product_name": "passthru", 00:04:59.424 "block_size": 512, 00:04:59.424 "num_blocks": 16384, 00:04:59.424 "uuid": "fea15bb4-1970-5f15-b468-010f780bab92", 00:04:59.424 "assigned_rate_limits": { 00:04:59.424 "rw_ios_per_sec": 0, 00:04:59.424 "rw_mbytes_per_sec": 0, 00:04:59.424 "r_mbytes_per_sec": 0, 00:04:59.424 "w_mbytes_per_sec": 0 00:04:59.424 }, 00:04:59.424 "claimed": false, 00:04:59.424 "zoned": false, 00:04:59.424 "supported_io_types": { 00:04:59.424 "read": true, 00:04:59.424 "write": true, 00:04:59.424 "unmap": true, 00:04:59.424 "flush": true, 00:04:59.424 "reset": true, 00:04:59.424 "nvme_admin": false, 00:04:59.424 "nvme_io": false, 00:04:59.424 "nvme_io_md": false, 00:04:59.424 "write_zeroes": true, 00:04:59.424 "zcopy": true, 00:04:59.424 "get_zone_info": false, 00:04:59.424 "zone_management": false, 00:04:59.424 "zone_append": false, 00:04:59.424 "compare": false, 00:04:59.424 "compare_and_write": false, 00:04:59.424 "abort": true, 00:04:59.424 "seek_hole": false, 00:04:59.424 "seek_data": false, 00:04:59.424 "copy": true, 00:04:59.424 "nvme_iov_md": false 00:04:59.424 }, 00:04:59.424 "memory_domains": [ 00:04:59.424 { 00:04:59.424 "dma_device_id": "system", 00:04:59.424 "dma_device_type": 1 00:04:59.424 }, 00:04:59.424 { 00:04:59.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.424 "dma_device_type": 2 00:04:59.424 } 00:04:59.424 ], 00:04:59.424 "driver_specific": { 00:04:59.424 "passthru": { 00:04:59.424 "name": "Passthru0", 00:04:59.424 "base_bdev_name": "Malloc2" 00:04:59.424 } 00:04:59.424 } 00:04:59.424 } 00:04:59.424 ]' 00:04:59.424 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.683 00:04:59.683 real 0m0.248s 00:04:59.683 user 0m0.165s 00:04:59.683 sys 0m0.025s 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.683 13:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.683 ************************************ 00:04:59.683 END TEST rpc_daemon_integrity 00:04:59.683 ************************************ 00:04:59.683 13:32:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.683 13:32:56 rpc -- rpc/rpc.sh@84 -- # killprocess 74907 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@950 -- # '[' -z 74907 ']' 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@954 -- # kill -0 74907 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@955 -- # uname 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74907 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74907' 00:04:59.683 killing process with pid 74907 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@969 -- # kill 74907 00:04:59.683 13:32:56 rpc -- common/autotest_common.sh@974 -- # wait 74907 00:04:59.942 00:04:59.942 real 0m2.446s 00:04:59.942 user 0m3.089s 00:04:59.942 sys 0m0.742s 00:04:59.942 13:32:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.942 13:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.942 ************************************ 00:04:59.942 END TEST rpc 00:04:59.942 ************************************ 00:04:59.942 13:32:56 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.942 13:32:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.942 13:32:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.942 13:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:00.202 ************************************ 00:05:00.202 START TEST skip_rpc 00:05:00.202 ************************************ 00:05:00.202 13:32:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:00.202 * Looking for test storage... 
00:05:00.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.202 13:32:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.202 13:32:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.202 13:32:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.202 13:32:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.202 13:32:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.202 13:32:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.202 ************************************ 00:05:00.202 START TEST skip_rpc 00:05:00.202 ************************************ 00:05:00.202 13:32:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:00.202 13:32:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=75601 00:05:00.202 13:32:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.202 13:32:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.202 13:32:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.202 [2024-07-25 13:32:57.033068] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:00.202 [2024-07-25 13:32:57.033117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75601 ] 00:05:00.202 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.202 [2024-07-25 13:32:57.067489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
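test_skip_rpc, now starting, launches spdk_tgt with --no-rpc-server, so no RPC listener ever comes up; the sleep 5 stands in for waitforlisten, and the spdk_get_version call that follows is expected to fail (the NOT wrapper inverts its exit status). A minimal sketch of the same flow, assuming the default socket path:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                      # nothing to poll: the RPC socket is never created
  if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered"; exit 1
  fi
  kill "$pid" && wait "$pid"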
00:05:00.461 [2024-07-25 13:32:57.101871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.461 [2024-07-25 13:32:57.139432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 75601 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 75601 ']' 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 75601 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.733 13:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75601 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75601' 00:05:05.733 killing process with pid 75601 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 75601 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 75601 00:05:05.733 00:05:05.733 real 0m5.351s 00:05:05.733 user 0m5.117s 00:05:05.733 sys 0m0.265s 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.733 13:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.733 ************************************ 00:05:05.733 END TEST skip_rpc 00:05:05.733 ************************************ 00:05:05.733 13:33:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.733 13:33:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.733 13:33:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.733 13:33:02 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.733 ************************************ 00:05:05.733 START TEST skip_rpc_with_json 00:05:05.733 ************************************ 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=76557 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 76557 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 76557 ']' 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.733 13:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.733 [2024-07-25 13:33:02.458747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:05.733 [2024-07-25 13:33:02.458792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76557 ] 00:05:05.733 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.733 [2024-07-25 13:33:02.493777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
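skip_rpc_with_json, by contrast, runs spdk_tgt with its RPC server enabled, so waitforlisten polls the default Unix socket before any rpc_cmd is issued (the "Waiting for process to start up and listen..." line above). A minimal equivalent of that poll — using rpc_get_methods as a cheap, always-available probe is an assumption of this sketch:

  while ! ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                  # retry until the target answers on the socket
  done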
00:05:05.733 [2024-07-25 13:33:02.527712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.733 [2024-07-25 13:33:02.567824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.669 [2024-07-25 13:33:03.246307] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:06.669 request: 00:05:06.669 { 00:05:06.669 "trtype": "tcp", 00:05:06.669 "method": "nvmf_get_transports", 00:05:06.669 "req_id": 1 00:05:06.669 } 00:05:06.669 Got JSON-RPC error response 00:05:06.669 response: 00:05:06.669 { 00:05:06.669 "code": -19, 00:05:06.669 "message": "No such device" 00:05:06.669 } 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.669 [2024-07-25 13:33:03.254392] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.669 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.669 { 00:05:06.669 "subsystems": [ 00:05:06.669 { 00:05:06.669 "subsystem": "vfio_user_target", 00:05:06.669 "config": null 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "keyring", 00:05:06.669 "config": [] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "iobuf", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "iobuf_set_options", 00:05:06.669 "params": { 00:05:06.669 "small_pool_count": 8192, 00:05:06.669 "large_pool_count": 1024, 00:05:06.669 "small_bufsize": 8192, 00:05:06.669 "large_bufsize": 135168 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "sock", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "sock_set_default_impl", 00:05:06.669 "params": { 00:05:06.669 "impl_name": "posix" 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "sock_impl_set_options", 00:05:06.669 "params": { 00:05:06.669 "impl_name": "ssl", 00:05:06.669 "recv_buf_size": 4096, 00:05:06.669 "send_buf_size": 4096, 00:05:06.669 "enable_recv_pipe": true, 00:05:06.669 "enable_quickack": false, 00:05:06.669 "enable_placement_id": 0, 00:05:06.669 "enable_zerocopy_send_server": true, 00:05:06.669 
"enable_zerocopy_send_client": false, 00:05:06.669 "zerocopy_threshold": 0, 00:05:06.669 "tls_version": 0, 00:05:06.669 "enable_ktls": false 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "sock_impl_set_options", 00:05:06.669 "params": { 00:05:06.669 "impl_name": "posix", 00:05:06.669 "recv_buf_size": 2097152, 00:05:06.669 "send_buf_size": 2097152, 00:05:06.669 "enable_recv_pipe": true, 00:05:06.669 "enable_quickack": false, 00:05:06.669 "enable_placement_id": 0, 00:05:06.669 "enable_zerocopy_send_server": true, 00:05:06.669 "enable_zerocopy_send_client": false, 00:05:06.669 "zerocopy_threshold": 0, 00:05:06.669 "tls_version": 0, 00:05:06.669 "enable_ktls": false 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "vmd", 00:05:06.669 "config": [] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "accel", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "accel_set_options", 00:05:06.669 "params": { 00:05:06.669 "small_cache_size": 128, 00:05:06.669 "large_cache_size": 16, 00:05:06.669 "task_count": 2048, 00:05:06.669 "sequence_count": 2048, 00:05:06.669 "buf_count": 2048 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "bdev", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "bdev_set_options", 00:05:06.669 "params": { 00:05:06.669 "bdev_io_pool_size": 65535, 00:05:06.669 "bdev_io_cache_size": 256, 00:05:06.669 "bdev_auto_examine": true, 00:05:06.669 "iobuf_small_cache_size": 128, 00:05:06.669 "iobuf_large_cache_size": 16 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_raid_set_options", 00:05:06.669 "params": { 00:05:06.669 "process_window_size_kb": 1024, 00:05:06.669 "process_max_bandwidth_mb_sec": 0 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_iscsi_set_options", 00:05:06.669 "params": { 00:05:06.669 "timeout_sec": 30 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_nvme_set_options", 00:05:06.669 "params": { 00:05:06.669 "action_on_timeout": "none", 00:05:06.669 "timeout_us": 0, 00:05:06.669 "timeout_admin_us": 0, 00:05:06.669 "keep_alive_timeout_ms": 10000, 00:05:06.669 "arbitration_burst": 0, 00:05:06.669 "low_priority_weight": 0, 00:05:06.669 "medium_priority_weight": 0, 00:05:06.669 "high_priority_weight": 0, 00:05:06.670 "nvme_adminq_poll_period_us": 10000, 00:05:06.670 "nvme_ioq_poll_period_us": 0, 00:05:06.670 "io_queue_requests": 0, 00:05:06.670 "delay_cmd_submit": true, 00:05:06.670 "transport_retry_count": 4, 00:05:06.670 "bdev_retry_count": 3, 00:05:06.670 "transport_ack_timeout": 0, 00:05:06.670 "ctrlr_loss_timeout_sec": 0, 00:05:06.670 "reconnect_delay_sec": 0, 00:05:06.670 "fast_io_fail_timeout_sec": 0, 00:05:06.670 "disable_auto_failback": false, 00:05:06.670 "generate_uuids": false, 00:05:06.670 "transport_tos": 0, 00:05:06.670 "nvme_error_stat": false, 00:05:06.670 "rdma_srq_size": 0, 00:05:06.670 "io_path_stat": false, 00:05:06.670 "allow_accel_sequence": false, 00:05:06.670 "rdma_max_cq_size": 0, 00:05:06.670 "rdma_cm_event_timeout_ms": 0, 00:05:06.670 "dhchap_digests": [ 00:05:06.670 "sha256", 00:05:06.670 "sha384", 00:05:06.670 "sha512" 00:05:06.670 ], 00:05:06.670 "dhchap_dhgroups": [ 00:05:06.670 "null", 00:05:06.670 "ffdhe2048", 00:05:06.670 "ffdhe3072", 00:05:06.670 "ffdhe4096", 00:05:06.670 "ffdhe6144", 00:05:06.670 "ffdhe8192" 00:05:06.670 ] 00:05:06.670 } 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "method": 
"bdev_nvme_set_hotplug", 00:05:06.670 "params": { 00:05:06.670 "period_us": 100000, 00:05:06.670 "enable": false 00:05:06.670 } 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "method": "bdev_wait_for_examine" 00:05:06.670 } 00:05:06.670 ] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "scsi", 00:05:06.670 "config": null 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "scheduler", 00:05:06.670 "config": [ 00:05:06.670 { 00:05:06.670 "method": "framework_set_scheduler", 00:05:06.670 "params": { 00:05:06.670 "name": "static" 00:05:06.670 } 00:05:06.670 } 00:05:06.670 ] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "vhost_scsi", 00:05:06.670 "config": [] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "vhost_blk", 00:05:06.670 "config": [] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "ublk", 00:05:06.670 "config": [] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "nbd", 00:05:06.670 "config": [] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "nvmf", 00:05:06.670 "config": [ 00:05:06.670 { 00:05:06.670 "method": "nvmf_set_config", 00:05:06.670 "params": { 00:05:06.670 "discovery_filter": "match_any", 00:05:06.670 "admin_cmd_passthru": { 00:05:06.670 "identify_ctrlr": false 00:05:06.670 } 00:05:06.670 } 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "method": "nvmf_set_max_subsystems", 00:05:06.670 "params": { 00:05:06.670 "max_subsystems": 1024 00:05:06.670 } 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "method": "nvmf_set_crdt", 00:05:06.670 "params": { 00:05:06.670 "crdt1": 0, 00:05:06.670 "crdt2": 0, 00:05:06.670 "crdt3": 0 00:05:06.670 } 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "method": "nvmf_create_transport", 00:05:06.670 "params": { 00:05:06.670 "trtype": "TCP", 00:05:06.670 "max_queue_depth": 128, 00:05:06.670 "max_io_qpairs_per_ctrlr": 127, 00:05:06.670 "in_capsule_data_size": 4096, 00:05:06.670 "max_io_size": 131072, 00:05:06.670 "io_unit_size": 131072, 00:05:06.670 "max_aq_depth": 128, 00:05:06.670 "num_shared_buffers": 511, 00:05:06.670 "buf_cache_size": 4294967295, 00:05:06.670 "dif_insert_or_strip": false, 00:05:06.670 "zcopy": false, 00:05:06.670 "c2h_success": true, 00:05:06.670 "sock_priority": 0, 00:05:06.670 "abort_timeout_sec": 1, 00:05:06.670 "ack_timeout": 0, 00:05:06.670 "data_wr_pool_size": 0 00:05:06.670 } 00:05:06.670 } 00:05:06.670 ] 00:05:06.670 }, 00:05:06.670 { 00:05:06.670 "subsystem": "iscsi", 00:05:06.670 "config": [ 00:05:06.670 { 00:05:06.670 "method": "iscsi_set_options", 00:05:06.670 "params": { 00:05:06.670 "node_base": "iqn.2016-06.io.spdk", 00:05:06.670 "max_sessions": 128, 00:05:06.670 "max_connections_per_session": 2, 00:05:06.670 "max_queue_depth": 64, 00:05:06.670 "default_time2wait": 2, 00:05:06.670 "default_time2retain": 20, 00:05:06.670 "first_burst_length": 8192, 00:05:06.670 "immediate_data": true, 00:05:06.670 "allow_duplicated_isid": false, 00:05:06.670 "error_recovery_level": 0, 00:05:06.670 "nop_timeout": 60, 00:05:06.670 "nop_in_interval": 30, 00:05:06.670 "disable_chap": false, 00:05:06.670 "require_chap": false, 00:05:06.670 "mutual_chap": false, 00:05:06.670 "chap_group": 0, 00:05:06.670 "max_large_datain_per_connection": 64, 00:05:06.670 "max_r2t_per_connection": 4, 00:05:06.670 "pdu_pool_size": 36864, 00:05:06.670 "immediate_data_pool_size": 16384, 00:05:06.670 "data_out_pool_size": 2048 00:05:06.670 } 00:05:06.670 } 00:05:06.670 ] 00:05:06.670 } 00:05:06.670 ] 00:05:06.670 } 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 76557 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 76557 ']' 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 76557 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76557 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76557' 00:05:06.670 killing process with pid 76557 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 76557 00:05:06.670 13:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 76557 00:05:06.929 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=76882 00:05:06.929 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:06.929 13:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 76882 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 76882 ']' 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 76882 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76882 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76882' 00:05:12.201 killing process with pid 76882 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 76882 00:05:12.201 13:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 76882 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.461 00:05:12.461 real 0m6.706s 00:05:12.461 user 0m6.486s 00:05:12.461 sys 0m0.634s 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.461 ************************************ 00:05:12.461 END 
TEST skip_rpc_with_json 00:05:12.461 ************************************ 00:05:12.461 13:33:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:12.461 13:33:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.461 13:33:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.461 13:33:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.461 ************************************ 00:05:12.461 START TEST skip_rpc_with_delay 00:05:12.461 ************************************ 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.461 [2024-07-25 13:33:09.228919] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
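skip_rpc_with_delay, running here, asserts that the flag combination itself is rejected: --wait-for-rpc is meaningless once --no-rpc-server has disabled the listener, and the ERROR line above is the expected outcome. The negative check reduces to roughly:

  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: invalid flag combination accepted"; exit 1
  fi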
00:05:12.461 [2024-07-25 13:33:09.228983] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.461 00:05:12.461 real 0m0.058s 00:05:12.461 user 0m0.030s 00:05:12.461 sys 0m0.027s 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.461 13:33:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:12.461 ************************************ 00:05:12.461 END TEST skip_rpc_with_delay 00:05:12.461 ************************************ 00:05:12.461 13:33:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:12.461 13:33:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:12.461 13:33:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:12.461 13:33:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.461 13:33:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.461 13:33:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.461 ************************************ 00:05:12.461 START TEST exit_on_failed_rpc_init 00:05:12.461 ************************************ 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=78347 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 78347 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 78347 ']' 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.461 13:33:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.721 [2024-07-25 13:33:09.365625] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
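exit_on_failed_rpc_init, starting here, brings up a first spdk_tgt (pid 78347) that binds the default RPC socket, then expects a second instance to abort during init because the socket is already taken — the "socket ... in use" error further below. In outline, simplified relative to the actual skip_rpc.sh:

  ./build/bin/spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
  first=$!
  # ... poll the socket as in the earlier sketch ...
  if ./build/bin/spdk_tgt -m 0x2; then     # must fail: RPC socket already in use
    exit 1                                 # second instance unexpectedly initialized
  fi
  kill "$first" && wait "$first"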
00:05:12.721 [2024-07-25 13:33:09.365671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78347 ] 00:05:12.721 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.721 [2024-07-25 13:33:09.400784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:12.721 [2024-07-25 13:33:09.435511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.721 [2024-07-25 13:33:09.475161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.289 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.547 [2024-07-25 13:33:10.198657] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:13.547 [2024-07-25 13:33:10.198713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78365 ] 00:05:13.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.547 [2024-07-25 13:33:10.234180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:13.547 [2024-07-25 13:33:10.269162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.547 [2024-07-25 13:33:10.307449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.547 [2024-07-25 13:33:10.307521] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:13.547 [2024-07-25 13:33:10.307533] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:13.547 [2024-07-25 13:33:10.307541] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 78347 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 78347 ']' 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 78347 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:13.547 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.548 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78347 00:05:13.548 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.548 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.548 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78347' 00:05:13.548 killing process with pid 78347 00:05:13.548 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 78347 00:05:13.548 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 78347 00:05:14.115 00:05:14.115 real 0m1.395s 00:05:14.115 user 0m1.519s 00:05:14.115 sys 0m0.446s 00:05:14.115 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.115 13:33:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.115 ************************************ 00:05:14.115 END TEST exit_on_failed_rpc_init 00:05:14.115 ************************************ 00:05:14.115 13:33:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.115 00:05:14.115 real 0m13.905s 00:05:14.115 user 0m13.297s 00:05:14.115 sys 0m1.647s 00:05:14.115 13:33:10 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.115 13:33:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.115 ************************************ 
00:05:14.115 END TEST skip_rpc 00:05:14.115 ************************************ 00:05:14.115 13:33:10 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.115 13:33:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.115 13:33:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.115 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:14.115 ************************************ 00:05:14.115 START TEST rpc_client 00:05:14.115 ************************************ 00:05:14.115 13:33:10 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.115 * Looking for test storage... 00:05:14.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:14.115 13:33:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:14.115 OK 00:05:14.115 13:33:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.115 00:05:14.115 real 0m0.131s 00:05:14.115 user 0m0.062s 00:05:14.115 sys 0m0.078s 00:05:14.115 13:33:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.115 13:33:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.115 ************************************ 00:05:14.115 END TEST rpc_client 00:05:14.115 ************************************ 00:05:14.375 13:33:11 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:14.375 13:33:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.375 13:33:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.375 13:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.375 ************************************ 00:05:14.375 START TEST json_config 00:05:14.375 ************************************ 00:05:14.375 13:33:11 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:14.375 13:33:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@19 -- 
# NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.375 13:33:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.375 13:33:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.375 13:33:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.375 13:33:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.375 13:33:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.375 13:33:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.375 13:33:11 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.375 13:33:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.375 13:33:11 json_config -- nvmf/common.sh@47 -- # : 0 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.376 13:33:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 
00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:14.376 INFO: JSON configuration test init 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.376 13:33:11 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.376 13:33:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.376 13:33:11 json_config -- json_config/common.sh@10 -- # shift 00:05:14.376 13:33:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.376 13:33:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.376 13:33:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.376 13:33:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.376 13:33:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.376 13:33:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=78733 00:05:14.376 13:33:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:14.376 13:33:11 json_config -- 
json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.376 Waiting for target to run... 00:05:14.376 13:33:11 json_config -- json_config/common.sh@25 -- # waitforlisten 78733 /var/tmp/spdk_tgt.sock 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@831 -- # '[' -z 78733 ']' 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.376 13:33:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.376 [2024-07-25 13:33:11.157719] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:14.376 [2024-07-25 13:33:11.157770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78733 ] 00:05:14.376 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.635 [2024-07-25 13:33:11.400018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:14.635 [2024-07-25 13:33:11.435007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.635 [2024-07-25 13:33:11.456581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.203 13:33:11 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.203 13:33:11 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:15.203 13:33:11 json_config -- json_config/common.sh@26 -- # echo '' 00:05:15.203 00:05:15.203 13:33:11 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:15.203 13:33:11 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:15.203 13:33:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.203 13:33:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.203 13:33:11 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:15.203 13:33:11 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:15.203 13:33:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.203 13:33:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.203 13:33:11 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:15.203 13:33:11 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:15.203 13:33:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:18.494 13:33:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 
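The --wait-for-rpc launch and waitforlisten handshake traced above reduce to this pattern (a sketch; the poll loop stands in for autotest_common.sh's waitforlisten, and paths are abbreviated):
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# Retry a cheap RPC until the Unix socket answers, as waitforlisten does:
until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# Then push the generated NVMe bdev config, mirroring 'tgt_rpc load_config' above:
scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config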
00:05:18.494 13:33:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:18.494 13:33:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@51 -- # sort 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:18.494 13:33:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.494 13:33:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:18.494 13:33:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.494 13:33:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:18.494 13:33:15 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.494 13:33:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.755 MallocForNvmf0 00:05:18.755 13:33:15 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.755 13:33:15 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.755 MallocForNvmf1 00:05:18.755 13:33:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.755 13:33:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.072 [2024-07-25 13:33:15.777931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.073 13:33:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.073 13:33:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.331 13:33:15 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.331 13:33:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.331 13:33:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.331 13:33:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.590 13:33:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.590 13:33:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.590 [2024-07-25 13:33:16.415977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.590 13:33:16 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:19.590 13:33:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.590 13:33:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.590 13:33:16 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:19.590 13:33:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.590 13:33:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.848 13:33:16 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:19.848 13:33:16 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.848 13:33:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.848 MallocBdevForConfigChangeCheck 00:05:19.848 13:33:16 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:19.848 13:33:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.848 13:33:16 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.848 13:33:16 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:19.848 13:33:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.416 13:33:17 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:20.416 INFO: shutting down applications... 00:05:20.416 13:33:17 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:20.416 13:33:17 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:20.416 13:33:17 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:20.416 13:33:17 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:22.321 Calling clear_iscsi_subsystem 00:05:22.321 Calling clear_nvmf_subsystem 00:05:22.321 Calling clear_nbd_subsystem 00:05:22.321 Calling clear_ublk_subsystem 00:05:22.321 Calling clear_vhost_blk_subsystem 00:05:22.321 Calling clear_vhost_scsi_subsystem 00:05:22.321 Calling clear_bdev_subsystem 00:05:22.321 13:33:19 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:22.321 13:33:19 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:22.321 13:33:19 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:22.321 13:33:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.321 13:33:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:22.321 13:33:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:22.888 13:33:19 json_config -- json_config/json_config.sh@349 -- # break 00:05:22.888 13:33:19 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:22.888 13:33:19 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:22.888 13:33:19 json_config -- json_config/common.sh@31 -- # local app=target 00:05:22.888 13:33:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.888 13:33:19 json_config -- json_config/common.sh@35 -- # [[ -n 78733 ]] 00:05:22.888 13:33:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 78733 00:05:22.888 13:33:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.888 13:33:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.888 13:33:19 json_config -- json_config/common.sh@41 -- # kill -0 78733 00:05:22.888 13:33:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.147 13:33:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.147 13:33:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.147 13:33:20 json_config -- json_config/common.sh@41 -- # kill -0 78733 00:05:23.147 13:33:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.147 13:33:20 json_config -- json_config/common.sh@43 -- # break 00:05:23.147 13:33:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.147 
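Stepping back: the create_nvmf_subsystem_config phase traced earlier condenses to the following RPC sequence (a sketch with paths shortened to scripts/rpc.py; every call appears verbatim in the trace):
R='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$R bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MiB bdev, 512 B blocks
$R bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MiB bdev, 1 KiB blocks
$R nvmf_create_transport -t tcp -u 8192 -c 0             # bring up the TCP transport
$R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$R bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # sentinel for the diff test
$R save_config > spdk_tgt_config.json                    # snapshot compared after relaunch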
13:33:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.147 SPDK target shutdown done 00:05:23.147 13:33:20 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:23.147 INFO: relaunching applications... 00:05:23.147 13:33:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.147 13:33:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:23.147 13:33:20 json_config -- json_config/common.sh@10 -- # shift 00:05:23.147 13:33:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.147 13:33:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.147 13:33:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.147 13:33:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.147 13:33:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.147 13:33:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=80420 00:05:23.147 13:33:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.147 Waiting for target to run... 00:05:23.147 13:33:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.147 13:33:20 json_config -- json_config/common.sh@25 -- # waitforlisten 80420 /var/tmp/spdk_tgt.sock 00:05:23.147 13:33:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 80420 ']' 00:05:23.407 13:33:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.407 13:33:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.407 13:33:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.407 13:33:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.407 13:33:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.407 [2024-07-25 13:33:20.086723] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:23.407 [2024-07-25 13:33:20.086796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80420 ] 00:05:23.407 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.666 [2024-07-25 13:33:20.345458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:23.666 [2024-07-25 13:33:20.381816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.666 [2024-07-25 13:33:20.404247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.953 [2024-07-25 13:33:23.420682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.953 [2024-07-25 13:33:23.453038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.953 13:33:23 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.953 13:33:23 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:26.953 13:33:23 json_config -- json_config/common.sh@26 -- # echo '' 00:05:26.953 00:05:26.953 13:33:23 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:26.953 13:33:23 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:26.953 INFO: Checking if target configuration is the same... 00:05:26.953 13:33:23 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:26.953 13:33:23 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.953 13:33:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.953 + '[' 2 -ne 2 ']' 00:05:26.953 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:26.953 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:26.953 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:26.953 +++ basename /dev/fd/62 00:05:26.953 ++ mktemp /tmp/62.XXX 00:05:26.953 + tmp_file_1=/tmp/62.RNR 00:05:26.953 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.953 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.953 + tmp_file_2=/tmp/spdk_tgt_config.json.Rvu 00:05:26.953 + ret=0 00:05:26.953 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.953 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.212 + diff -u /tmp/62.RNR /tmp/spdk_tgt_config.json.Rvu 00:05:27.212 + echo 'INFO: JSON config files are the same' 00:05:27.212 INFO: JSON config files are the same 00:05:27.212 + rm /tmp/62.RNR /tmp/spdk_tgt_config.json.Rvu 00:05:27.212 + exit 0 00:05:27.212 13:33:23 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:27.212 13:33:23 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:27.212 INFO: changing configuration and checking if this can be detected... 
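The "JSON config files are the same" verdict above is json_diff.sh at work: both sides are canonicalized with config_filter.py before diffing, so key and array ordering cannot cause false mismatches. Roughly (tmp-file names here are illustrative):
# Canonicalize the live config and the saved snapshot, then compare:
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'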
00:05:27.212 13:33:23 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.212 13:33:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.212 13:33:24 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.212 13:33:24 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:27.212 13:33:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.212 + '[' 2 -ne 2 ']' 00:05:27.212 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.212 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.212 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.212 +++ basename /dev/fd/62 00:05:27.212 ++ mktemp /tmp/62.XXX 00:05:27.212 + tmp_file_1=/tmp/62.96c 00:05:27.212 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.212 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.212 + tmp_file_2=/tmp/spdk_tgt_config.json.MBy 00:05:27.212 + ret=0 00:05:27.212 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.471 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.730 + diff -u /tmp/62.96c /tmp/spdk_tgt_config.json.MBy 00:05:27.730 + ret=1 00:05:27.730 + echo '=== Start of file: /tmp/62.96c ===' 00:05:27.730 + cat /tmp/62.96c 00:05:27.730 + echo '=== End of file: /tmp/62.96c ===' 00:05:27.730 + echo '' 00:05:27.730 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MBy ===' 00:05:27.730 + cat /tmp/spdk_tgt_config.json.MBy 00:05:27.730 + echo '=== End of file: /tmp/spdk_tgt_config.json.MBy ===' 00:05:27.730 + echo '' 00:05:27.730 + rm /tmp/62.96c /tmp/spdk_tgt_config.json.MBy 00:05:27.730 + exit 1 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:27.730 INFO: configuration change detected. 
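The mirror-image check traced above: delete the sentinel bdev, re-diff, and require a non-zero exit. A sketch:
# Drop the sentinel created during setup...
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# ...and the same comparison must now fail, proving changes are detectable:
if ! test/json_config/json_diff.sh \
        <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json; then
    echo 'INFO: configuration change detected.'
fi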
00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@321 -- # [[ -n 80420 ]] 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.730 13:33:24 json_config -- json_config/json_config.sh@327 -- # killprocess 80420 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@950 -- # '[' -z 80420 ']' 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@954 -- # kill -0 80420 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@955 -- # uname 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80420 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80420' 00:05:27.730 killing process with pid 80420 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@969 -- # kill 80420 00:05:27.730 13:33:24 json_config -- common/autotest_common.sh@974 -- # wait 80420 00:05:30.264 13:33:26 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.264 13:33:26 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:30.264 13:33:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.264 13:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.264 13:33:26 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:30.264 13:33:26 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:30.264 INFO: Success 00:05:30.264 00:05:30.264 real 0m15.617s 00:05:30.264 user 0m16.139s 
00:05:30.264 sys 0m1.965s 00:05:30.264 13:33:26 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.264 13:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.264 ************************************ 00:05:30.264 END TEST json_config 00:05:30.264 ************************************ 00:05:30.264 13:33:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.264 13:33:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.264 13:33:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.264 13:33:26 -- common/autotest_common.sh@10 -- # set +x 00:05:30.264 ************************************ 00:05:30.264 START TEST json_config_extra_key 00:05:30.264 ************************************ 00:05:30.264 13:33:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.264 13:33:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.264 13:33:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.264 13:33:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.264 13:33:26 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.264 13:33:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.264 13:33:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.264 13:33:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:30.264 13:33:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.264 13:33:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:30.264 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.265 13:33:26 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:30.265 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:30.265 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:30.265 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.265 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:30.265 INFO: launching applications... 00:05:30.265 13:33:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=81632 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.265 Waiting for target to run... 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 81632 /var/tmp/spdk_tgt.sock 00:05:30.265 13:33:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 81632 ']' 00:05:30.265 13:33:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.265 13:33:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.265 13:33:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.265 13:33:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.265 13:33:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.265 13:33:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.265 [2024-07-25 13:33:26.901162] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
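Unlike the earlier --wait-for-rpc runs, the json_config_extra_key target above boots from a canned config file; the launch reduces to (paths abbreviated):
# --json applies a whole JSON config at startup, so no RPC-driven init is needed:
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json test/json_config/extra_key.json &
# After waitforlisten succeeds, the test only has to shut the app down cleanly.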
00:05:30.265 [2024-07-25 13:33:26.901214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81632 ] 00:05:30.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.523 [2024-07-25 13:33:27.297289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.523 [2024-07-25 13:33:27.333736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.523 [2024-07-25 13:33:27.363725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.089 13:33:27 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.090 13:33:27 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.090 00:05:31.090 13:33:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.090 INFO: shutting down applications... 00:05:31.090 13:33:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 81632 ]] 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 81632 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 81632 00:05:31.090 13:33:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 81632 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:31.349 13:33:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:31.349 SPDK target shutdown done 00:05:31.349 13:33:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:31.349 Success 00:05:31.349 00:05:31.349 real 0m1.472s 00:05:31.349 user 0m1.042s 00:05:31.349 sys 0m0.577s 00:05:31.349 13:33:28 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.349 13:33:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.349 ************************************ 00:05:31.349 END TEST json_config_extra_key 00:05:31.349 ************************************ 00:05:31.608 13:33:28 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.608 13:33:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:05:31.608 13:33:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.608 13:33:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.608 ************************************ 00:05:31.608 START TEST alias_rpc 00:05:31.608 ************************************ 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.608 * Looking for test storage... 00:05:31.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:31.608 13:33:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.608 13:33:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=81956 00:05:31.608 13:33:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 81956 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 81956 ']' 00:05:31.608 13:33:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.608 13:33:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.608 [2024-07-25 13:33:28.457534] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:31.608 [2024-07-25 13:33:28.457591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81956 ] 00:05:31.608 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.868 [2024-07-25 13:33:28.494706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
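The alias_rpc target starting above is driven through rpc.py's load_config; a sketch of the call it makes (the flag reading is an assumption: -i is taken to mean 'include deprecated RPC aliases', with the JSON arriving on stdin):
# Hypothetical input; the test's real config is not shown in this trace.
echo '{"subsystems": []}' | scripts/rpc.py load_config -i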
00:05:31.868 [2024-07-25 13:33:28.528381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.868 [2024-07-25 13:33:28.568343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.435 13:33:29 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.435 13:33:29 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.435 13:33:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:32.694 13:33:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 81956 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 81956 ']' 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 81956 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81956 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81956' 00:05:32.694 killing process with pid 81956 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@969 -- # kill 81956 00:05:32.694 13:33:29 alias_rpc -- common/autotest_common.sh@974 -- # wait 81956 00:05:32.952 00:05:32.952 real 0m1.474s 00:05:32.952 user 0m1.552s 00:05:32.952 sys 0m0.447s 00:05:32.952 13:33:29 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.952 13:33:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.952 ************************************ 00:05:32.952 END TEST alias_rpc 00:05:32.952 ************************************ 00:05:32.952 13:33:29 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:32.952 13:33:29 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:32.952 13:33:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.952 13:33:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.952 13:33:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.211 ************************************ 00:05:33.211 START TEST spdkcli_tcp 00:05:33.211 ************************************ 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.211 * Looking for test storage... 
00:05:33.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=82271 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 82271 00:05:33.211 13:33:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 82271 ']' 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.211 13:33:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.211 [2024-07-25 13:33:30.018018] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:33.211 [2024-07-25 13:33:30.018076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82271 ] 00:05:33.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.211 [2024-07-25 13:33:30.054443] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
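The spdkcli_tcp run beginning above (IP_ADDRESS=127.0.0.1, PORT=9998) checks that rpc.py can reach a target over TCP; as the trace below shows, a socat bridge does the translation:
# Expose the Unix-domain RPC socket on TCP port 9998:
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
# Drive it remotely; -r/-t bound connection retries and per-call timeout:
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods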
00:05:33.211 [2024-07-25 13:33:30.089322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.470 [2024-07-25 13:33:30.129636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.470 [2024-07-25 13:33:30.129640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.037 13:33:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.037 13:33:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:34.037 13:33:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:34.037 13:33:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=82512 00:05:34.037 13:33:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:34.296 [ 00:05:34.296 "bdev_malloc_delete", 00:05:34.296 "bdev_malloc_create", 00:05:34.296 "bdev_null_resize", 00:05:34.296 "bdev_null_delete", 00:05:34.296 "bdev_null_create", 00:05:34.296 "bdev_nvme_cuse_unregister", 00:05:34.296 "bdev_nvme_cuse_register", 00:05:34.296 "bdev_opal_new_user", 00:05:34.296 "bdev_opal_set_lock_state", 00:05:34.296 "bdev_opal_delete", 00:05:34.296 "bdev_opal_get_info", 00:05:34.296 "bdev_opal_create", 00:05:34.296 "bdev_nvme_opal_revert", 00:05:34.296 "bdev_nvme_opal_init", 00:05:34.296 "bdev_nvme_send_cmd", 00:05:34.296 "bdev_nvme_get_path_iostat", 00:05:34.296 "bdev_nvme_get_mdns_discovery_info", 00:05:34.296 "bdev_nvme_stop_mdns_discovery", 00:05:34.296 "bdev_nvme_start_mdns_discovery", 00:05:34.296 "bdev_nvme_set_multipath_policy", 00:05:34.296 "bdev_nvme_set_preferred_path", 00:05:34.296 "bdev_nvme_get_io_paths", 00:05:34.296 "bdev_nvme_remove_error_injection", 00:05:34.296 "bdev_nvme_add_error_injection", 00:05:34.296 "bdev_nvme_get_discovery_info", 00:05:34.296 "bdev_nvme_stop_discovery", 00:05:34.296 "bdev_nvme_start_discovery", 00:05:34.296 "bdev_nvme_get_controller_health_info", 00:05:34.296 "bdev_nvme_disable_controller", 00:05:34.296 "bdev_nvme_enable_controller", 00:05:34.296 "bdev_nvme_reset_controller", 00:05:34.296 "bdev_nvme_get_transport_statistics", 00:05:34.296 "bdev_nvme_apply_firmware", 00:05:34.296 "bdev_nvme_detach_controller", 00:05:34.296 "bdev_nvme_get_controllers", 00:05:34.296 "bdev_nvme_attach_controller", 00:05:34.296 "bdev_nvme_set_hotplug", 00:05:34.296 "bdev_nvme_set_options", 00:05:34.296 "bdev_passthru_delete", 00:05:34.296 "bdev_passthru_create", 00:05:34.296 "bdev_lvol_set_parent_bdev", 00:05:34.296 "bdev_lvol_set_parent", 00:05:34.296 "bdev_lvol_check_shallow_copy", 00:05:34.296 "bdev_lvol_start_shallow_copy", 00:05:34.296 "bdev_lvol_grow_lvstore", 00:05:34.296 "bdev_lvol_get_lvols", 00:05:34.296 "bdev_lvol_get_lvstores", 00:05:34.296 "bdev_lvol_delete", 00:05:34.296 "bdev_lvol_set_read_only", 00:05:34.296 "bdev_lvol_resize", 00:05:34.296 "bdev_lvol_decouple_parent", 00:05:34.296 "bdev_lvol_inflate", 00:05:34.296 "bdev_lvol_rename", 00:05:34.296 "bdev_lvol_clone_bdev", 00:05:34.296 "bdev_lvol_clone", 00:05:34.296 "bdev_lvol_snapshot", 00:05:34.296 "bdev_lvol_create", 00:05:34.296 "bdev_lvol_delete_lvstore", 00:05:34.296 "bdev_lvol_rename_lvstore", 00:05:34.296 "bdev_lvol_create_lvstore", 00:05:34.296 "bdev_raid_set_options", 00:05:34.297 "bdev_raid_remove_base_bdev", 00:05:34.297 "bdev_raid_add_base_bdev", 00:05:34.297 "bdev_raid_delete", 00:05:34.297 "bdev_raid_create", 00:05:34.297 "bdev_raid_get_bdevs", 00:05:34.297 "bdev_error_inject_error", 00:05:34.297 "bdev_error_delete", 
00:05:34.297 "bdev_error_create", 00:05:34.297 "bdev_split_delete", 00:05:34.297 "bdev_split_create", 00:05:34.297 "bdev_delay_delete", 00:05:34.297 "bdev_delay_create", 00:05:34.297 "bdev_delay_update_latency", 00:05:34.297 "bdev_zone_block_delete", 00:05:34.297 "bdev_zone_block_create", 00:05:34.297 "blobfs_create", 00:05:34.297 "blobfs_detect", 00:05:34.297 "blobfs_set_cache_size", 00:05:34.297 "bdev_aio_delete", 00:05:34.297 "bdev_aio_rescan", 00:05:34.297 "bdev_aio_create", 00:05:34.297 "bdev_ftl_set_property", 00:05:34.297 "bdev_ftl_get_properties", 00:05:34.297 "bdev_ftl_get_stats", 00:05:34.297 "bdev_ftl_unmap", 00:05:34.297 "bdev_ftl_unload", 00:05:34.297 "bdev_ftl_delete", 00:05:34.297 "bdev_ftl_load", 00:05:34.297 "bdev_ftl_create", 00:05:34.297 "bdev_virtio_attach_controller", 00:05:34.297 "bdev_virtio_scsi_get_devices", 00:05:34.297 "bdev_virtio_detach_controller", 00:05:34.297 "bdev_virtio_blk_set_hotplug", 00:05:34.297 "bdev_iscsi_delete", 00:05:34.297 "bdev_iscsi_create", 00:05:34.297 "bdev_iscsi_set_options", 00:05:34.297 "accel_error_inject_error", 00:05:34.297 "ioat_scan_accel_module", 00:05:34.297 "dsa_scan_accel_module", 00:05:34.297 "iaa_scan_accel_module", 00:05:34.297 "vfu_virtio_create_scsi_endpoint", 00:05:34.297 "vfu_virtio_scsi_remove_target", 00:05:34.297 "vfu_virtio_scsi_add_target", 00:05:34.297 "vfu_virtio_create_blk_endpoint", 00:05:34.297 "vfu_virtio_delete_endpoint", 00:05:34.297 "keyring_file_remove_key", 00:05:34.297 "keyring_file_add_key", 00:05:34.297 "keyring_linux_set_options", 00:05:34.297 "iscsi_get_histogram", 00:05:34.297 "iscsi_enable_histogram", 00:05:34.297 "iscsi_set_options", 00:05:34.297 "iscsi_get_auth_groups", 00:05:34.297 "iscsi_auth_group_remove_secret", 00:05:34.297 "iscsi_auth_group_add_secret", 00:05:34.297 "iscsi_delete_auth_group", 00:05:34.297 "iscsi_create_auth_group", 00:05:34.297 "iscsi_set_discovery_auth", 00:05:34.297 "iscsi_get_options", 00:05:34.297 "iscsi_target_node_request_logout", 00:05:34.297 "iscsi_target_node_set_redirect", 00:05:34.297 "iscsi_target_node_set_auth", 00:05:34.297 "iscsi_target_node_add_lun", 00:05:34.297 "iscsi_get_stats", 00:05:34.297 "iscsi_get_connections", 00:05:34.297 "iscsi_portal_group_set_auth", 00:05:34.297 "iscsi_start_portal_group", 00:05:34.297 "iscsi_delete_portal_group", 00:05:34.297 "iscsi_create_portal_group", 00:05:34.297 "iscsi_get_portal_groups", 00:05:34.297 "iscsi_delete_target_node", 00:05:34.297 "iscsi_target_node_remove_pg_ig_maps", 00:05:34.297 "iscsi_target_node_add_pg_ig_maps", 00:05:34.297 "iscsi_create_target_node", 00:05:34.297 "iscsi_get_target_nodes", 00:05:34.297 "iscsi_delete_initiator_group", 00:05:34.297 "iscsi_initiator_group_remove_initiators", 00:05:34.297 "iscsi_initiator_group_add_initiators", 00:05:34.297 "iscsi_create_initiator_group", 00:05:34.297 "iscsi_get_initiator_groups", 00:05:34.297 "nvmf_set_crdt", 00:05:34.297 "nvmf_set_config", 00:05:34.297 "nvmf_set_max_subsystems", 00:05:34.297 "nvmf_stop_mdns_prr", 00:05:34.297 "nvmf_publish_mdns_prr", 00:05:34.297 "nvmf_subsystem_get_listeners", 00:05:34.297 "nvmf_subsystem_get_qpairs", 00:05:34.297 "nvmf_subsystem_get_controllers", 00:05:34.297 "nvmf_get_stats", 00:05:34.297 "nvmf_get_transports", 00:05:34.297 "nvmf_create_transport", 00:05:34.297 "nvmf_get_targets", 00:05:34.297 "nvmf_delete_target", 00:05:34.297 "nvmf_create_target", 00:05:34.297 "nvmf_subsystem_allow_any_host", 00:05:34.297 "nvmf_subsystem_remove_host", 00:05:34.297 "nvmf_subsystem_add_host", 00:05:34.297 "nvmf_ns_remove_host", 
00:05:34.297 "nvmf_ns_add_host", 00:05:34.297 "nvmf_subsystem_remove_ns", 00:05:34.297 "nvmf_subsystem_add_ns", 00:05:34.297 "nvmf_subsystem_listener_set_ana_state", 00:05:34.297 "nvmf_discovery_get_referrals", 00:05:34.297 "nvmf_discovery_remove_referral", 00:05:34.297 "nvmf_discovery_add_referral", 00:05:34.297 "nvmf_subsystem_remove_listener", 00:05:34.297 "nvmf_subsystem_add_listener", 00:05:34.297 "nvmf_delete_subsystem", 00:05:34.297 "nvmf_create_subsystem", 00:05:34.297 "nvmf_get_subsystems", 00:05:34.297 "env_dpdk_get_mem_stats", 00:05:34.297 "nbd_get_disks", 00:05:34.297 "nbd_stop_disk", 00:05:34.297 "nbd_start_disk", 00:05:34.297 "ublk_recover_disk", 00:05:34.297 "ublk_get_disks", 00:05:34.297 "ublk_stop_disk", 00:05:34.297 "ublk_start_disk", 00:05:34.297 "ublk_destroy_target", 00:05:34.297 "ublk_create_target", 00:05:34.297 "virtio_blk_create_transport", 00:05:34.297 "virtio_blk_get_transports", 00:05:34.297 "vhost_controller_set_coalescing", 00:05:34.297 "vhost_get_controllers", 00:05:34.297 "vhost_delete_controller", 00:05:34.297 "vhost_create_blk_controller", 00:05:34.297 "vhost_scsi_controller_remove_target", 00:05:34.297 "vhost_scsi_controller_add_target", 00:05:34.297 "vhost_start_scsi_controller", 00:05:34.297 "vhost_create_scsi_controller", 00:05:34.297 "thread_set_cpumask", 00:05:34.297 "framework_get_governor", 00:05:34.297 "framework_get_scheduler", 00:05:34.297 "framework_set_scheduler", 00:05:34.297 "framework_get_reactors", 00:05:34.297 "thread_get_io_channels", 00:05:34.297 "thread_get_pollers", 00:05:34.297 "thread_get_stats", 00:05:34.297 "framework_monitor_context_switch", 00:05:34.297 "spdk_kill_instance", 00:05:34.297 "log_enable_timestamps", 00:05:34.297 "log_get_flags", 00:05:34.297 "log_clear_flag", 00:05:34.297 "log_set_flag", 00:05:34.297 "log_get_level", 00:05:34.297 "log_set_level", 00:05:34.297 "log_get_print_level", 00:05:34.297 "log_set_print_level", 00:05:34.297 "framework_enable_cpumask_locks", 00:05:34.297 "framework_disable_cpumask_locks", 00:05:34.297 "framework_wait_init", 00:05:34.297 "framework_start_init", 00:05:34.297 "scsi_get_devices", 00:05:34.297 "bdev_get_histogram", 00:05:34.297 "bdev_enable_histogram", 00:05:34.297 "bdev_set_qos_limit", 00:05:34.297 "bdev_set_qd_sampling_period", 00:05:34.297 "bdev_get_bdevs", 00:05:34.297 "bdev_reset_iostat", 00:05:34.297 "bdev_get_iostat", 00:05:34.297 "bdev_examine", 00:05:34.297 "bdev_wait_for_examine", 00:05:34.297 "bdev_set_options", 00:05:34.297 "notify_get_notifications", 00:05:34.297 "notify_get_types", 00:05:34.297 "accel_get_stats", 00:05:34.297 "accel_set_options", 00:05:34.297 "accel_set_driver", 00:05:34.297 "accel_crypto_key_destroy", 00:05:34.297 "accel_crypto_keys_get", 00:05:34.297 "accel_crypto_key_create", 00:05:34.297 "accel_assign_opc", 00:05:34.297 "accel_get_module_info", 00:05:34.297 "accel_get_opc_assignments", 00:05:34.297 "vmd_rescan", 00:05:34.297 "vmd_remove_device", 00:05:34.297 "vmd_enable", 00:05:34.297 "sock_get_default_impl", 00:05:34.297 "sock_set_default_impl", 00:05:34.297 "sock_impl_set_options", 00:05:34.297 "sock_impl_get_options", 00:05:34.297 "iobuf_get_stats", 00:05:34.297 "iobuf_set_options", 00:05:34.297 "keyring_get_keys", 00:05:34.297 "framework_get_pci_devices", 00:05:34.297 "framework_get_config", 00:05:34.297 "framework_get_subsystems", 00:05:34.297 "vfu_tgt_set_base_path", 00:05:34.297 "trace_get_info", 00:05:34.297 "trace_get_tpoint_group_mask", 00:05:34.297 "trace_disable_tpoint_group", 00:05:34.297 "trace_enable_tpoint_group", 00:05:34.297 
"trace_clear_tpoint_mask", 00:05:34.297 "trace_set_tpoint_mask", 00:05:34.297 "spdk_get_version", 00:05:34.297 "rpc_get_methods" 00:05:34.297 ] 00:05:34.297 13:33:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:34.297 13:33:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.297 13:33:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.297 13:33:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:34.297 13:33:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 82271 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 82271 ']' 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 82271 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82271 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82271' 00:05:34.297 killing process with pid 82271 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 82271 00:05:34.297 13:33:31 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 82271 00:05:34.557 00:05:34.557 real 0m1.547s 00:05:34.557 user 0m2.847s 00:05:34.557 sys 0m0.506s 00:05:34.557 13:33:31 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.557 13:33:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.557 ************************************ 00:05:34.557 END TEST spdkcli_tcp 00:05:34.557 ************************************ 00:05:34.557 13:33:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.557 13:33:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.557 13:33:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.557 13:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.816 ************************************ 00:05:34.816 START TEST dpdk_mem_utility 00:05:34.816 ************************************ 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.816 * Looking for test storage... 
00:05:34.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:34.816 13:33:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.816 13:33:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=82613 00:05:34.816 13:33:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.816 13:33:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 82613 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 82613 ']' 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.816 13:33:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.816 [2024-07-25 13:33:31.622048] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:34.816 [2024-07-25 13:33:31.622102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82613 ] 00:05:34.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.816 [2024-07-25 13:33:31.657207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
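The dpdk_mem_utility test that follows exercises two pieces visible in the next records: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump into the heap/mempool/memzone summary. A sketch of the same flow against an already-running spdk_tgt (relative paths assumed, as above):

    # dump the target's DPDK memory state, then summarize it offline
    ./scripts/rpc.py env_dpdk_get_mem_stats   # responds with {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/dpdk_mem_info.py                # totals per heap, mempool and memzone
    ./scripts/dpdk_mem_info.py -m 0           # element-by-element detail for heap id 0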
00:05:34.816 [2024-07-25 13:33:31.692350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.075 [2024-07-25 13:33:31.730807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.643 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.643 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:35.643 13:33:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.643 13:33:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.643 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.643 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.643 { 00:05:35.643 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.643 } 00:05:35.643 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.643 13:33:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.643 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:35.643 1 heaps totaling size 814.000000 MiB 00:05:35.643 size: 814.000000 MiB heap id: 0 00:05:35.643 end heaps---------- 00:05:35.643 8 mempools totaling size 598.116089 MiB 00:05:35.643 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:35.643 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:35.643 size: 84.521057 MiB name: bdev_io_82613 00:05:35.643 size: 51.011292 MiB name: evtpool_82613 00:05:35.643 size: 50.003479 MiB name: msgpool_82613 00:05:35.643 size: 21.763794 MiB name: PDU_Pool 00:05:35.643 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:35.643 size: 0.026123 MiB name: Session_Pool 00:05:35.643 end mempools------- 00:05:35.643 6 memzones totaling size 4.142822 MiB 00:05:35.643 size: 1.000366 MiB name: RG_ring_0_82613 00:05:35.643 size: 1.000366 MiB name: RG_ring_1_82613 00:05:35.643 size: 1.000366 MiB name: RG_ring_4_82613 00:05:35.643 size: 1.000366 MiB name: RG_ring_5_82613 00:05:35.643 size: 0.125366 MiB name: RG_ring_2_82613 00:05:35.643 size: 0.015991 MiB name: RG_ring_3_82613 00:05:35.643 end memzones------- 00:05:35.643 13:33:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:35.643 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:35.643 list of free elements. 
size: 12.519348 MiB 00:05:35.643 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:35.643 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:35.643 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:35.643 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:35.643 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:35.643 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:35.643 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:35.643 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:35.643 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:35.643 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:35.643 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:35.643 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:35.643 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:35.643 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:35.643 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:35.643 list of standard malloc elements. size: 199.218079 MiB 00:05:35.643 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:35.643 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:35.643 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:35.643 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:35.643 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:35.643 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:35.643 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:35.643 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:35.643 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:35.643 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:35.643 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:35.643 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:35.643 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:35.643 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:35.643 list of memzone associated elements. size: 602.262573 MiB 00:05:35.643 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:35.643 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:35.643 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:35.643 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:35.643 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:35.643 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_82613_0 00:05:35.643 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:35.643 associated memzone info: size: 48.002930 MiB name: MP_evtpool_82613_0 00:05:35.643 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:35.643 associated memzone info: size: 48.002930 MiB name: MP_msgpool_82613_0 00:05:35.643 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:35.643 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:35.643 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:35.643 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:35.643 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:35.643 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_82613 00:05:35.643 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:35.643 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_82613 00:05:35.643 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:35.643 associated memzone info: size: 1.007996 MiB name: MP_evtpool_82613 00:05:35.643 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:35.643 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:35.643 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:35.643 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:35.643 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:35.643 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:35.643 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:35.643 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:35.643 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:35.643 associated memzone info: size: 1.000366 MiB name: RG_ring_0_82613 00:05:35.643 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:35.644 associated memzone info: size: 1.000366 MiB name: RG_ring_1_82613 00:05:35.644 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:35.644 associated memzone info: size: 1.000366 MiB name: RG_ring_4_82613 00:05:35.644 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:35.644 associated memzone info: size: 1.000366 MiB name: RG_ring_5_82613 00:05:35.644 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:35.644 associated memzone info: size: 
0.500366 MiB name: RG_MP_bdev_io_82613 00:05:35.644 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:35.644 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:35.644 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:35.644 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:35.644 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:35.644 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:35.644 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:35.644 associated memzone info: size: 0.125366 MiB name: RG_ring_2_82613 00:05:35.644 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:35.644 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:35.644 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:35.644 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:35.644 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:35.644 associated memzone info: size: 0.015991 MiB name: RG_ring_3_82613 00:05:35.644 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:35.644 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:35.644 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:35.644 associated memzone info: size: 0.000183 MiB name: MP_msgpool_82613 00:05:35.644 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:35.644 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_82613 00:05:35.644 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:35.644 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:35.644 13:33:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:35.644 13:33:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 82613 00:05:35.644 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 82613 ']' 00:05:35.644 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 82613 00:05:35.644 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:35.644 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.903 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82613 00:05:35.903 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.903 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.903 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82613' 00:05:35.903 killing process with pid 82613 00:05:35.903 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 82613 00:05:35.903 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 82613 00:05:36.183 00:05:36.183 real 0m1.398s 00:05:36.183 user 0m1.435s 00:05:36.183 sys 0m0.444s 00:05:36.183 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.183 13:33:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.183 ************************************ 00:05:36.183 END TEST dpdk_mem_utility 00:05:36.183 ************************************ 00:05:36.183 13:33:32 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:36.183 13:33:32 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.183 13:33:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.183 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.183 ************************************ 00:05:36.183 START TEST event 00:05:36.183 ************************************ 00:05:36.183 13:33:32 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:36.183 * Looking for test storage... 00:05:36.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:36.183 13:33:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:36.183 13:33:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:36.183 13:33:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.183 13:33:33 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:36.183 13:33:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.183 13:33:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.458 ************************************ 00:05:36.458 START TEST event_perf 00:05:36.458 ************************************ 00:05:36.458 13:33:33 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.458 Running I/O for 1 seconds...[2024-07-25 13:33:33.094471] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:36.458 [2024-07-25 13:33:33.094549] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82929 ] 00:05:36.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.458 [2024-07-25 13:33:33.134122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:36.458 [2024-07-25 13:33:33.168273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.458 [2024-07-25 13:33:33.209318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.458 [2024-07-25 13:33:33.209336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.458 [2024-07-25 13:33:33.209423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.458 [2024-07-25 13:33:33.209425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.396 Running I/O for 1 seconds... 00:05:37.396 lcore 0: 215775 00:05:37.396 lcore 1: 215774 00:05:37.396 lcore 2: 215776 00:05:37.396 lcore 3: 215775 00:05:37.396 done. 
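The event_perf run above pushed events on four reactors for one second; the per-lcore counters printed before "done." (roughly 215.8k each) are the result. Its -m 0xF argument is a hexadecimal core mask covering cores 0-3 and -t 1 is the duration in seconds; a mask can be assembled one bit per core, as in this small sketch:

    # core-mask arithmetic for -m: one bit per core
    printf '0x%x\n' $(( (1<<0) | (1<<1) | (1<<2) | (1<<3) ))   # prints 0xf -> cores 0-3
    ./test/event/event_perf/event_perf -m 0xF -t 1             # the invocation traced above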
00:05:37.396 00:05:37.396 real 0m1.197s 00:05:37.396 user 0m4.101s 00:05:37.396 sys 0m0.092s 00:05:37.396 13:33:34 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.396 13:33:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 ************************************ 00:05:37.396 END TEST event_perf 00:05:37.396 ************************************ 00:05:37.655 13:33:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.655 13:33:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:37.655 13:33:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.655 13:33:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.655 ************************************ 00:05:37.655 START TEST event_reactor 00:05:37.655 ************************************ 00:05:37.655 13:33:34 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.655 [2024-07-25 13:33:34.361699] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:37.655 [2024-07-25 13:33:34.361793] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83223 ] 00:05:37.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.655 [2024-07-25 13:33:34.398372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:37.655 [2024-07-25 13:33:34.432603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.655 [2024-07-25 13:33:34.470206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.033 test_start 00:05:39.033 oneshot 00:05:39.033 tick 100 00:05:39.033 tick 100 00:05:39.033 tick 250 00:05:39.033 tick 100 00:05:39.033 tick 100 00:05:39.033 tick 100 00:05:39.033 tick 250 00:05:39.033 tick 500 00:05:39.033 tick 100 00:05:39.033 tick 100 00:05:39.033 tick 250 00:05:39.033 tick 100 00:05:39.033 tick 100 00:05:39.033 test_end 00:05:39.033 00:05:39.033 real 0m1.186s 00:05:39.033 user 0m1.096s 00:05:39.033 sys 0m0.086s 00:05:39.033 13:33:35 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.033 13:33:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.033 ************************************ 00:05:39.033 END TEST event_reactor 00:05:39.033 ************************************ 00:05:39.033 13:33:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.033 13:33:35 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:39.033 13:33:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.033 13:33:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.033 ************************************ 00:05:39.033 START TEST event_reactor_perf 00:05:39.033 ************************************ 00:05:39.033 13:33:35 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.033 [2024-07-25 13:33:35.607405] Starting SPDK v24.09-pre git sha1 
704257090 / DPDK 24.07.0-rc3 initialization... 00:05:39.033 [2024-07-25 13:33:35.607472] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83501 ] 00:05:39.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.033 [2024-07-25 13:33:35.646391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:39.033 [2024-07-25 13:33:35.679650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.033 [2024-07-25 13:33:35.716777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.970 test_start 00:05:39.970 test_end 00:05:39.970 Performance: 537040 events per second 00:05:39.970 00:05:39.970 real 0m1.187s 00:05:39.970 user 0m1.100s 00:05:39.970 sys 0m0.083s 00:05:39.970 13:33:36 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.970 13:33:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.970 ************************************ 00:05:39.970 END TEST event_reactor_perf 00:05:39.970 ************************************ 00:05:39.970 13:33:36 event -- event/event.sh@49 -- # uname -s 00:05:39.970 13:33:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.970 13:33:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.970 13:33:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.970 13:33:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.970 13:33:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.970 ************************************ 00:05:39.970 START TEST event_scheduler 00:05:39.970 ************************************ 00:05:39.970 13:33:36 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:40.229 * Looking for test storage... 00:05:40.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:40.229 13:33:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:40.229 13:33:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=83750 00:05:40.229 13:33:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.229 13:33:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 83750 00:05:40.229 13:33:36 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 83750 ']' 00:05:40.229 13:33:36 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.229 13:33:36 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.229 13:33:36 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
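The scheduler app above was started with --wait-for-rpc, which pauses the framework before subsystem init; that is what lets the test pick a scheduler first (framework_set_scheduler in the records below) and only then call framework_start_init. The -p 0x2 argument selects the main lcore, as the EAL line's --main-lcore=2 confirms. A sketch of that startup ordering, flags copied from the trace:

    # defer framework init, choose the scheduler, then finish init
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic   # succeeds below even though the dpdk governor cannot init
    ./scripts/rpc.py framework_start_init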
00:05:40.229 13:33:36 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.229 13:33:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.229 13:33:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:40.229 [2024-07-25 13:33:36.981066] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:40.229 [2024-07-25 13:33:36.981123] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83750 ] 00:05:40.229 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.229 [2024-07-25 13:33:37.021349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:40.229 [2024-07-25 13:33:37.052789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.229 [2024-07-25 13:33:37.094918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.229 [2024-07-25 13:33:37.095001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.229 [2024-07-25 13:33:37.095087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.229 [2024-07-25 13:33:37.095090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:41.168 13:33:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 [2024-07-25 13:33:37.789526] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:41.168 [2024-07-25 13:33:37.789547] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:41.168 [2024-07-25 13:33:37.789557] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:41.168 [2024-07-25 13:33:37.789565] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:41.168 [2024-07-25 13:33:37.789572] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 [2024-07-25 13:33:37.856862] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
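The scheduler_create_thread records that follow use rpc.py's --plugin mechanism: scheduler_plugin is a test-local Python module that registers extra methods (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete) on top of the target's normal RPC set. A sketch, assuming the plugin's directory has been put on PYTHONPATH the way the test script arranges it; names, masks and the thread ids 11/12 are copied from the trace:

    # test-only RPCs provided by an rpc.py plugin module
    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12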
00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 ************************************ 00:05:41.168 START TEST scheduler_create_thread 00:05:41.168 ************************************ 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 2 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 3 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 4 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 5 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 6 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 7 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 8 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 9 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 10 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.168 13:33:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.168 13:33:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.168 13:33:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.168 13:33:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.072 13:33:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.072 13:33:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:43.072 13:33:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:43.072 13:33:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.072 13:33:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.638 13:33:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.638 00:05:43.638 real 0m2.618s 00:05:43.638 user 0m0.025s 00:05:43.638 sys 0m0.005s 00:05:43.638 13:33:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.638 13:33:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.638 ************************************ 00:05:43.638 END TEST scheduler_create_thread 00:05:43.638 ************************************ 00:05:43.897 13:33:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.897 13:33:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 83750 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 83750 ']' 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 83750 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83750 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83750' 00:05:43.897 killing process with pid 83750 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 83750 00:05:43.897 13:33:40 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 83750 00:05:44.156 [2024-07-25 13:33:40.999024] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
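Every test in this log tears its target down through the same killprocess helper whose xtrace is visible just above: a kill -0 liveness check, a comm= lookup that guards against killing a sudo wrapper, then kill and wait. A simplified reconstruction of that helper from the trace (not the exact autotest_common.sh source):

    # teardown pattern seen throughout this log
    killprocess() {
        local pid=$1
        kill -0 "$pid"                            # fail fast if the process already exited
        name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0'/'reactor_2' in these runs
        [ "$name" = sudo ] || { echo "killing process with pid $pid"; kill "$pid"; }
        wait "$pid"                               # reap it so the next test starts clean
    }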
00:05:44.415 00:05:44.415 real 0m4.332s 00:05:44.415 user 0m8.214s 00:05:44.415 sys 0m0.437s 00:05:44.415 13:33:41 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.415 13:33:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.416 ************************************ 00:05:44.416 END TEST event_scheduler 00:05:44.416 ************************************ 00:05:44.416 13:33:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:44.416 13:33:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:44.416 13:33:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.416 13:33:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.416 13:33:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.416 ************************************ 00:05:44.416 START TEST app_repeat 00:05:44.416 ************************************ 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=84444 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 84444' 00:05:44.416 Process app_repeat pid: 84444 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:44.416 spdk_app_start Round 0 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 84444 /var/tmp/spdk-nbd.sock 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 84444 ']' 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.416 13:33:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.416 13:33:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:44.416 [2024-07-25 13:33:41.299959] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
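Round 0 of app_repeat, which the next records trace, creates two 64 MiB malloc bdevs over the /var/tmp/spdk-nbd.sock RPC socket, exports them as kernel NBD devices, and verifies them with direct-I/O dd. The flow, condensed from the trace (the test-file path is shortened here):

    # create bdevs, attach them to /dev/nbd*, then verify readability with dd
    rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc bdev_malloc_create 64 4096      # 64 MiB, 4096-byte blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096      # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/nbd0 of=./nbdtest bs=4096 count=1 iflag=direct   # the waitfornbd-style check traced below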
00:05:44.416 [2024-07-25 13:33:41.300017] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84444 ] 00:05:44.675 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.675 [2024-07-25 13:33:41.337711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:44.675 [2024-07-25 13:33:41.372185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.675 [2024-07-25 13:33:41.412288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.675 [2024-07-25 13:33:41.412291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.675 13:33:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.675 13:33:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:44.675 13:33:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.934 Malloc0 00:05:44.934 13:33:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.193 Malloc1 00:05:45.193 13:33:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.193 13:33:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.193 /dev/nbd0 00:05:45.193 13:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.193 13:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.193 13:33:42 
event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.193 1+0 records in 00:05:45.193 1+0 records out 00:05:45.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228242 s, 17.9 MB/s 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.193 13:33:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.193 13:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.193 13:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.193 13:33:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.453 /dev/nbd1 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.453 1+0 records in 00:05:45.453 1+0 records out 00:05:45.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261564 s, 15.7 MB/s 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.453 13:33:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.453 
13:33:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.453 13:33:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.713 { 00:05:45.713 "nbd_device": "/dev/nbd0", 00:05:45.713 "bdev_name": "Malloc0" 00:05:45.713 }, 00:05:45.713 { 00:05:45.713 "nbd_device": "/dev/nbd1", 00:05:45.713 "bdev_name": "Malloc1" 00:05:45.713 } 00:05:45.713 ]' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.713 { 00:05:45.713 "nbd_device": "/dev/nbd0", 00:05:45.713 "bdev_name": "Malloc0" 00:05:45.713 }, 00:05:45.713 { 00:05:45.713 "nbd_device": "/dev/nbd1", 00:05:45.713 "bdev_name": "Malloc1" 00:05:45.713 } 00:05:45.713 ]' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.713 /dev/nbd1' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.713 /dev/nbd1' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.713 256+0 records in 00:05:45.713 256+0 records out 00:05:45.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113819 s, 92.1 MB/s 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.713 256+0 records in 00:05:45.713 256+0 records out 00:05:45.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019656 s, 53.3 MB/s 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.713 256+0 records in 00:05:45.713 256+0 records out 00:05:45.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209184 s, 50.1 MB/s 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.713 13:33:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.972 13:33:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.231 13:33:42 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.231 13:33:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.490 13:33:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.490 13:33:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.490 13:33:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.490 13:33:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.490 13:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.490 13:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.491 13:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.491 13:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.491 13:33:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.491 13:33:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.491 13:33:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.491 13:33:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.491 13:33:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.491 13:33:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.749 [2024-07-25 13:33:43.542310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.749 [2024-07-25 13:33:43.576918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.749 [2024-07-25 13:33:43.576921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.749 [2024-07-25 13:33:43.617332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.749 [2024-07-25 13:33:43.617374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
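What each round in this trace boils down to, written out as a minimal shell sketch rather than xtrace output: create two 64 MB malloc bdevs over the app's RPC socket, export them as nbd block devices, push 1 MiB of random data through each and compare it back, then tear everything down for the next round. The rpc.py subcommands and the dd/cmp invocations are the ones visible above; the RPC shorthand and the /tmp paths are illustrative assumptions.

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # run from an SPDK checkout (assumption)

    # two 64 MB malloc bdevs with a 4096-byte block size -> Malloc0, Malloc1
    $RPC bdev_malloc_create 64 4096
    $RPC bdev_malloc_create 64 4096

    # export each bdev through the kernel nbd driver
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    # 1 MiB of random data, written through each nbd device and verified back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$dev"
    done
    rm /tmp/nbdrandtest

    # detach the devices and stop the app so the next round starts clean
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM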
00:05:50.037 13:33:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.037 13:33:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:50.037 spdk_app_start Round 1 00:05:50.037 13:33:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 84444 /var/tmp/spdk-nbd.sock 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 84444 ']' 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.037 13:33:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.037 13:33:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.037 Malloc0 00:05:50.037 13:33:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.037 Malloc1 00:05:50.037 13:33:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.037 13:33:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.038 13:33:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.038 13:33:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.296 /dev/nbd0 00:05:50.296 13:33:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.296 13:33:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.296 1+0 records in 00:05:50.296 1+0 records out 00:05:50.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254859 s, 16.1 MB/s 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.296 13:33:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.296 13:33:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.296 13:33:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.296 13:33:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.555 /dev/nbd1 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.555 1+0 records in 00:05:50.555 1+0 records out 00:05:50.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243496 s, 16.8 MB/s 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.555 13:33:47 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.555 13:33:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.555 13:33:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.813 13:33:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.813 { 00:05:50.813 "nbd_device": "/dev/nbd0", 00:05:50.814 "bdev_name": "Malloc0" 00:05:50.814 }, 00:05:50.814 { 00:05:50.814 "nbd_device": "/dev/nbd1", 00:05:50.814 "bdev_name": "Malloc1" 00:05:50.814 } 00:05:50.814 ]' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.814 { 00:05:50.814 "nbd_device": "/dev/nbd0", 00:05:50.814 "bdev_name": "Malloc0" 00:05:50.814 }, 00:05:50.814 { 00:05:50.814 "nbd_device": "/dev/nbd1", 00:05:50.814 "bdev_name": "Malloc1" 00:05:50.814 } 00:05:50.814 ]' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.814 /dev/nbd1' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.814 /dev/nbd1' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.814 256+0 records in 00:05:50.814 256+0 records out 00:05:50.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104893 s, 100 MB/s 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.814 256+0 records in 00:05:50.814 256+0 records out 00:05:50.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0200037 s, 52.4 MB/s 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.814 256+0 records in 00:05:50.814 256+0 records out 00:05:50.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208291 s, 50.3 MB/s 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.814 13:33:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.072 13:33:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.330 13:33:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.589 13:33:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.589 13:33:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.589 13:33:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.848 [2024-07-25 13:33:48.627041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.848 [2024-07-25 13:33:48.661388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.848 [2024-07-25 13:33:48.661403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.848 [2024-07-25 13:33:48.703353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.848 [2024-07-25 13:33:48.703394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
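Between attaching and detaching, each round also checks how many nbd devices the target reports. A sketch of that check, reusing the RPC shorthand from the sketch above and the same nbd_get_disks RPC and jq filter that appear in the trace; the expected count of 2 holds while both devices are exported and drops to 0 after the stop calls:

    disks_json=$($RPC nbd_get_disks)                    # JSON array of {nbd_device, bdev_name}
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits non-zero on 0 matches
    if [ "$count" -ne 2 ]; then
        echo "expected 2 nbd devices, found $count" >&2
    fi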
00:05:55.182 13:33:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.182 13:33:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:55.182 spdk_app_start Round 2 00:05:55.182 13:33:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 84444 /var/tmp/spdk-nbd.sock 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 84444 ']' 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.182 13:33:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:55.182 13:33:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.182 Malloc0 00:05:55.182 13:33:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.182 Malloc1 00:05:55.182 13:33:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.182 13:33:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.441 /dev/nbd0 00:05:55.441 13:33:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.441 13:33:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.441 1+0 records in 00:05:55.441 1+0 records out 00:05:55.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176053 s, 23.3 MB/s 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.441 13:33:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.441 13:33:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.441 13:33:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.441 13:33:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.700 /dev/nbd1 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.700 1+0 records in 00:05:55.700 1+0 records out 00:05:55.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269403 s, 15.2 MB/s 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.700 13:33:52 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.700 13:33:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.700 { 00:05:55.700 "nbd_device": "/dev/nbd0", 00:05:55.700 "bdev_name": "Malloc0" 00:05:55.700 }, 00:05:55.700 { 00:05:55.700 "nbd_device": "/dev/nbd1", 00:05:55.700 "bdev_name": "Malloc1" 00:05:55.700 } 00:05:55.700 ]' 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.700 { 00:05:55.700 "nbd_device": "/dev/nbd0", 00:05:55.700 "bdev_name": "Malloc0" 00:05:55.700 }, 00:05:55.700 { 00:05:55.700 "nbd_device": "/dev/nbd1", 00:05:55.700 "bdev_name": "Malloc1" 00:05:55.700 } 00:05:55.700 ]' 00:05:55.700 13:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.960 /dev/nbd1' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.960 /dev/nbd1' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.960 256+0 records in 00:05:55.960 256+0 records out 00:05:55.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113785 s, 92.2 MB/s 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.960 256+0 records in 00:05:55.960 256+0 records out 00:05:55.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.020006 s, 52.4 MB/s 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.960 256+0 records in 00:05:55.960 256+0 records out 00:05:55.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016188 s, 64.8 MB/s 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.960 13:33:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.219 13:33:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.220 13:33:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.220 13:33:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.220 13:33:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.220 13:33:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.220 13:33:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.220 13:33:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.220 13:33:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.220 13:33:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.220 13:33:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.478 13:33:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.479 13:33:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.479 13:33:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.738 13:33:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.997 [2024-07-25 13:33:53.687689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.997 [2024-07-25 13:33:53.722487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.997 [2024-07-25 13:33:53.722490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.997 [2024-07-25 13:33:53.762487] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.997 [2024-07-25 13:33:53.762530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
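The waitfornbd helper whose xtrace dominates each attach step above can be read as the following sketch: poll /proc/partitions until the kernel publishes the device (up to 20 attempts), then read one 4096-byte block through it to confirm it actually serves I/O. The grep, dd, stat, and size checks are exactly the steps traced above; the sleep interval and the temp-file path are assumptions, since the trace does not show them.

    waitfornbd() {
        local nbd_name=$1 i size
        # phase 1: wait for the device to show up in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry interval assumed, not visible in the trace
        done
        # phase 2: one direct read, then verify a non-empty block came back
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

    waitfornbd nbd0   # returns 0 once /dev/nbd0 is readable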
00:06:00.284 13:33:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 84444 /var/tmp/spdk-nbd.sock 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 84444 ']' 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.284 13:33:56 event.app_repeat -- event/event.sh@39 -- # killprocess 84444 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 84444 ']' 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 84444 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84444 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84444' 00:06:00.284 killing process with pid 84444 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@969 -- # kill 84444 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@974 -- # wait 84444 00:06:00.284 spdk_app_start is called in Round 0. 00:06:00.284 Shutdown signal received, stop current app iteration 00:06:00.284 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:06:00.284 spdk_app_start is called in Round 1. 00:06:00.284 Shutdown signal received, stop current app iteration 00:06:00.284 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:06:00.284 spdk_app_start is called in Round 2. 00:06:00.284 Shutdown signal received, stop current app iteration 00:06:00.284 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:06:00.284 spdk_app_start is called in Round 3. 
00:06:00.284 Shutdown signal received, stop current app iteration 00:06:00.284 13:33:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.284 13:33:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.284 00:06:00.284 real 0m15.635s 00:06:00.284 user 0m33.328s 00:06:00.284 sys 0m2.969s 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.284 13:33:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 ************************************ 00:06:00.284 END TEST app_repeat 00:06:00.284 ************************************ 00:06:00.284 13:33:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.284 13:33:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.284 13:33:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.284 13:33:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.284 13:33:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 ************************************ 00:06:00.284 START TEST cpu_locks 00:06:00.284 ************************************ 00:06:00.284 13:33:56 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.284 * Looking for test storage... 00:06:00.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.284 13:33:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.284 13:33:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.284 13:33:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.284 13:33:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.284 13:33:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.284 13:33:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.284 13:33:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 ************************************ 00:06:00.284 START TEST default_locks 00:06:00.284 ************************************ 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=87521 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 87521 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 87521 ']' 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.284 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.543 [2024-07-25 13:33:57.173234] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:00.544 [2024-07-25 13:33:57.173276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87521 ] 00:06:00.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.544 [2024-07-25 13:33:57.208321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:00.544 [2024-07-25 13:33:57.239737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.544 [2024-07-25 13:33:57.278853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.802 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.802 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:00.802 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 87521 00:06:00.802 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 87521 00:06:00.802 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.369 lslocks: write error 00:06:01.369 13:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 87521 00:06:01.369 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 87521 ']' 00:06:01.369 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 87521 00:06:01.369 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:01.369 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.369 13:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87521 00:06:01.369 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.369 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.369 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87521' 00:06:01.369 killing process with pid 87521 00:06:01.369 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 87521 00:06:01.369 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 87521 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 87521 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 87521 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:01.629 
13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 87521 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 87521 ']' 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (87521) - No such process 00:06:01.629 ERROR: process (pid: 87521) is no longer running 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.629 00:06:01.629 real 0m1.203s 00:06:01.629 user 0m1.181s 00:06:01.629 sys 0m0.581s 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.629 13:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.629 ************************************ 00:06:01.629 END TEST default_locks 00:06:01.629 ************************************ 00:06:01.629 13:33:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.629 13:33:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.629 13:33:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.629 13:33:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.629 ************************************ 00:06:01.629 START TEST default_locks_via_rpc 00:06:01.629 ************************************ 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=87704 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 87704 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 87704 ']' 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.629 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.629 [2024-07-25 13:33:58.446183] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:01.629 [2024-07-25 13:33:58.446225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87704 ] 00:06:01.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.629 [2024-07-25 13:33:58.481675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.889 [2024-07-25 13:33:58.517851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.889 [2024-07-25 13:33:58.555181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 87704 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 87704 00:06:01.889 13:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.457 13:33:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 87704 00:06:02.457 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 87704 ']' 00:06:02.457 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 87704 00:06:02.457 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.457 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.457 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87704 00:06:02.458 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.458 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.458 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87704' 00:06:02.458 killing process with pid 87704 00:06:02.458 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 87704 00:06:02.458 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 87704 00:06:03.026 00:06:03.026 real 0m1.222s 00:06:03.026 user 0m1.194s 00:06:03.026 sys 0m0.549s 00:06:03.026 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.026 13:33:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.026 ************************************ 00:06:03.026 END TEST default_locks_via_rpc 00:06:03.026 ************************************ 00:06:03.026 13:33:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:03.026 13:33:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.026 13:33:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.026 13:33:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.026 ************************************ 00:06:03.026 START TEST non_locking_app_on_locked_coremask 00:06:03.026 ************************************ 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=87871 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 87871 /var/tmp/spdk.sock 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 87871 ']' 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.026 13:33:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.026 [2024-07-25 13:33:59.748971] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:03.026 [2024-07-25 13:33:59.749017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87871 ] 00:06:03.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.026 [2024-07-25 13:33:59.785507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.026 [2024-07-25 13:33:59.820904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.026 [2024-07-25 13:33:59.860784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=88118 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 88118 /var/tmp/spdk2.sock 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 88118 ']' 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.962 13:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.962 [2024-07-25 13:34:00.593229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
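The second target just launched runs with --disable-cpumask-locks and its own RPC socket, which is exactly what lets it share core 0 with the lock-holding first instance. The launch pattern, condensed from the two spdk_tgt invocations in this test (binary path shortened):

    spdk_tgt -m 0x1 &                                                  # pid 87871: claims core 0, holds /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 88118: takes no locks, so sharing core 0 is tolerated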
00:06:03.962 [2024-07-25 13:34:00.593283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88118 ] 00:06:03.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.962 [2024-07-25 13:34:00.631261] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.962 [2024-07-25 13:34:00.688396] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.962 [2024-07-25 13:34:00.688415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.962 [2024-07-25 13:34:00.762327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.530 13:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.530 13:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:04.530 13:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 87871 00:06:04.530 13:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 87871 00:06:04.530 13:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.907 lslocks: write error 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 87871 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 87871 ']' 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 87871 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87871 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87871' 00:06:05.907 killing process with pid 87871 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 87871 00:06:05.907 13:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 87871 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 88118 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 88118 ']' 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 88118 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88118 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88118' 00:06:06.475 killing process with pid 88118 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 88118 00:06:06.475 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 88118 00:06:06.734 00:06:06.734 real 0m3.870s 00:06:06.734 user 0m4.126s 00:06:06.734 sys 0m1.367s 00:06:06.734 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.734 13:34:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.734 ************************************ 00:06:06.734 END TEST non_locking_app_on_locked_coremask 00:06:06.734 ************************************ 00:06:06.734 13:34:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:06.734 13:34:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.734 13:34:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.734 13:34:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.993 ************************************ 00:06:06.993 START TEST locking_app_on_unlocked_coremask 00:06:06.993 ************************************ 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=88679 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 88679 /var/tmp/spdk.sock 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 88679 ']' 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
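This test inverts the previous one: the first target is started with --disable-cpumask-locks, leaving core 0 unclaimed, so the second, locking target (launched just below) can take it. Condensed:

    spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 88679: runs on core 0 but holds no lock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 88813: locking enabled; its claim succeeds because the core is unowned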
00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.994 13:34:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.994 [2024-07-25 13:34:03.697659] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:06.994 [2024-07-25 13:34:03.697707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88679 ] 00:06:06.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.994 [2024-07-25 13:34:03.732156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.994 [2024-07-25 13:34:03.766968] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.994 [2024-07-25 13:34:03.766990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.994 [2024-07-25 13:34:03.801368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=88813 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 88813 /var/tmp/spdk2.sock 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 88813 ']' 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.930 13:34:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.930 [2024-07-25 13:34:04.532734] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:07.930 [2024-07-25 13:34:04.532789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88813 ] 00:06:07.930 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.930 [2024-07-25 13:34:04.574323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
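Teardown of each pid goes through the killprocess helper whose steps are traced throughout this log. A simplified reconstruction (the real helper also special-cases a sudo wrapper, elided here; the pid is assumed to be a child of the test shell so wait can reap it):

    killprocess() {
      local pid=$1 process_name=
      kill -0 "$pid"                                      # assert the process is still alive
      if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                         # reap the child so callers can assert a clean exit
    }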
00:06:07.930 [2024-07-25 13:34:04.632543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.930 [2024-07-25 13:34:04.710633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.497 13:34:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.497 13:34:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.497 13:34:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 88813 00:06:08.497 13:34:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 88813 00:06:08.497 13:34:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.875 lslocks: write error 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 88679 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 88679 ']' 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 88679 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88679 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88679' 00:06:09.875 killing process with pid 88679 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 88679 00:06:09.875 13:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 88679 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 88813 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 88813 ']' 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 88813 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88813 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88813' 00:06:10.444 killing process with pid 88813 00:06:10.444 13:34:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 88813 00:06:10.444 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 88813 00:06:11.014 00:06:11.014 real 0m3.970s 00:06:11.014 user 0m4.237s 00:06:11.014 sys 0m1.327s 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.014 ************************************ 00:06:11.014 END TEST locking_app_on_unlocked_coremask 00:06:11.014 ************************************ 00:06:11.014 13:34:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:11.014 13:34:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.014 13:34:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.014 13:34:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.014 ************************************ 00:06:11.014 START TEST locking_app_on_locked_coremask 00:06:11.014 ************************************ 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=89424 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 89424 /var/tmp/spdk.sock 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 89424 ']' 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.014 13:34:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.014 [2024-07-25 13:34:07.750786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:11.014 [2024-07-25 13:34:07.750837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89424 ] 00:06:11.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.014 [2024-07-25 13:34:07.787610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
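The failure expected next comes from claim_cpu_cores in app.c: each claimed core corresponds to an exclusively locked file under /var/tmp, and a second non-blocking claim on the same file fails immediately. A hypothetical stand-alone demo of that mechanism with flock(1) (not part of the suite; spdk_tgt's own fcntl-based locking is assumed, not shown in this log):

    flock -xn /var/tmp/spdk_cpu_lock_000 -c 'sleep 60' &   # first taker holds core 0's lock file
    sleep 1
    flock -xn /var/tmp/spdk_cpu_lock_000 -c 'true' ||
      echo 'core 0 already claimed'                        # second taker fails at once, like the second target (pid 89520) below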
00:06:11.014 [2024-07-25 13:34:07.823340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.014 [2024-07-25 13:34:07.861111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=89520 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 89520 /var/tmp/spdk2.sock 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 89520 /var/tmp/spdk2.sock 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 89520 /var/tmp/spdk2.sock 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 89520 ']' 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.658 13:34:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.917 [2024-07-25 13:34:08.587757] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:11.917 [2024-07-25 13:34:08.587807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89520 ] 00:06:11.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.917 [2024-07-25 13:34:08.625322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
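Expected-failure cases are wrapped in the NOT helper; its bookkeeping (es=0, the es > 128 check, the closing (( !es == 0 ))) is traced verbatim in the lines that follow. A simplified reconstruction:

    NOT() {
      local es=0
      "$@" || es=$?                    # run the wrapped command, keep its exit status
      (( es > 128 )) && return "$es"   # death by signal is a real failure, pass it through
      (( !es == 0 ))                   # invert: NOT succeeds only if the command failed
    }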
00:06:11.917 [2024-07-25 13:34:08.684166] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 89424 has claimed it. 00:06:11.917 [2024-07-25 13:34:08.684199] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (89520) - No such process 00:06:12.485 ERROR: process (pid: 89520) is no longer running 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 89424 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 89424 00:06:12.485 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.422 lslocks: write error 00:06:13.422 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 89424 00:06:13.422 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 89424 ']' 00:06:13.422 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 89424 00:06:13.422 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.422 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.422 13:34:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89424 00:06:13.422 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.422 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.422 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89424' 00:06:13.422 killing process with pid 89424 00:06:13.422 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 89424 00:06:13.422 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 89424 00:06:13.681 00:06:13.681 real 0m2.636s 00:06:13.681 user 0m2.873s 00:06:13.681 sys 0m0.846s 00:06:13.681 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.681 13:34:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.681 ************************************ 00:06:13.681 END TEST locking_app_on_locked_coremask 00:06:13.681 ************************************ 00:06:13.681 13:34:10 event.cpu_locks -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:13.681 13:34:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.681 13:34:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.681 13:34:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.681 ************************************ 00:06:13.681 START TEST locking_overlapped_coremask 00:06:13.681 ************************************ 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=89858 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 89858 /var/tmp/spdk.sock 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 89858 ']' 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.681 13:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.681 [2024-07-25 13:34:10.460423] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:13.681 [2024-07-25 13:34:10.460470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89858 ] 00:06:13.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.681 [2024-07-25 13:34:10.495324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
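The overlapped test chooses its core masks so that exactly one core collides. The arithmetic behind the -m flags used here:

    # -m 0x7  -> binary 00111 -> cores 0,1,2  (first target; its three reactors start below)
    # -m 0x1c -> binary 11100 -> cores 2,3,4  (second target)
    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2, where the second claim will fail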
00:06:13.681 [2024-07-25 13:34:10.530778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.940 [2024-07-25 13:34:10.572442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.940 [2024-07-25 13:34:10.572536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.940 [2024-07-25 13:34:10.572536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=90083 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 90083 /var/tmp/spdk2.sock 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 90083 /var/tmp/spdk2.sock 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 90083 /var/tmp/spdk2.sock 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 90083 ']' 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.546 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.546 [2024-07-25 13:34:11.305214] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
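Once the second claim is rejected, the test checks that the surviving target still holds exactly one lock file per claimed core. The glob comparison as it appears in the trace further down (cpu_locks.sh@36-38):

    check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one per core in mask 0x7
      [[ ${locks[*]} == "${locks_expected[*]}" ]]         # must match element for element
    }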
00:06:14.546 [2024-07-25 13:34:11.305266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90083 ] 00:06:14.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.546 [2024-07-25 13:34:11.343648] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.546 [2024-07-25 13:34:11.406311] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 89858 has claimed it. 00:06:14.546 [2024-07-25 13:34:11.406344] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (90083) - No such process 00:06:15.115 ERROR: process (pid: 90083) is no longer running 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 89858 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 89858 ']' 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 89858 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89858 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 89858' 00:06:15.116 killing process with pid 89858 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 89858 00:06:15.116 13:34:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 89858 00:06:15.684 00:06:15.684 real 0m1.869s 00:06:15.684 user 0m5.255s 00:06:15.684 sys 0m0.477s 00:06:15.684 13:34:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.684 13:34:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.684 ************************************ 00:06:15.684 END TEST locking_overlapped_coremask 00:06:15.684 ************************************ 00:06:15.684 13:34:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.684 13:34:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.684 13:34:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.684 13:34:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.684 ************************************ 00:06:15.684 START TEST locking_overlapped_coremask_via_rpc 00:06:15.684 ************************************ 00:06:15.684 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=90361 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 90361 /var/tmp/spdk.sock 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 90361 ']' 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.685 13:34:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.685 [2024-07-25 13:34:12.410823] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:15.685 [2024-07-25 13:34:12.410872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90361 ] 00:06:15.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.685 [2024-07-25 13:34:12.448286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:15.685 [2024-07-25 13:34:12.482723] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.685 [2024-07-25 13:34:12.482743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.685 [2024-07-25 13:34:12.523826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.685 [2024-07-25 13:34:12.523920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.685 [2024-07-25 13:34:12.523922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=90394 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 90394 /var/tmp/spdk2.sock 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 90394 ']' 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.621 13:34:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.622 [2024-07-25 13:34:13.272132] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:16.622 [2024-07-25 13:34:13.272185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90394 ] 00:06:16.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.622 [2024-07-25 13:34:13.308627] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:16.622 [2024-07-25 13:34:13.371378] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
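In this via-rpc variant both targets boot with locking disabled and locking is turned on afterwards through the framework_enable_cpumask_locks RPC, so only the first claim can win. Condensed from the rpc_cmd calls traced below (cpu_locks.sh@155-156):

    rpc_cmd framework_enable_cpumask_locks                              # first target (default /var/tmp/spdk.sock): claims cores 0,1,2
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: core 2 is already taken, a JSON-RPC error is expected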
00:06:16.622 [2024-07-25 13:34:13.371399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.622 [2024-07-25 13:34:13.452028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.622 [2024-07-25 13:34:13.455764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.622 [2024-07-25 13:34:13.455765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.189 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.189 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.189 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.189 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.189 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.448 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.448 [2024-07-25 13:34:14.088790] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 90361 has claimed it. 
00:06:17.448 request: 00:06:17.449 { 00:06:17.449 "method": "framework_enable_cpumask_locks", 00:06:17.449 "req_id": 1 00:06:17.449 } 00:06:17.449 Got JSON-RPC error response 00:06:17.449 response: 00:06:17.449 { 00:06:17.449 "code": -32603, 00:06:17.449 "message": "Failed to claim CPU core: 2" 00:06:17.449 } 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 90361 /var/tmp/spdk.sock 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 90361 ']' 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 90394 /var/tmp/spdk2.sock 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 90394 ']' 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
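The -32603 response above is the expected outcome: the same RPC, issued against the second target's socket, cannot claim core 2 because pid 90361 already holds it, and the NOT helper inverts the failing exit status so the test proceeds. Roughly:

  # Must fail: core 2 is locked by the first target (see the
  # "Cannot create lock on core 2" notice above).
  if scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      exit 1   # unexpected success
  fi
  # check_remaining_locks (traced below) then verifies exactly cores 0-2 hold locks:
  locks=(/var/tmp/spdk_cpu_lock_*)
  [[ ${locks[*]} == '/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002' ]]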
00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.449 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.709 00:06:17.709 real 0m2.107s 00:06:17.709 user 0m0.830s 00:06:17.709 sys 0m0.212s 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.709 13:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.709 ************************************ 00:06:17.709 END TEST locking_overlapped_coremask_via_rpc 00:06:17.709 ************************************ 00:06:17.709 13:34:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.709 13:34:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 90361 ]] 00:06:17.709 13:34:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 90361 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 90361 ']' 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 90361 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90361 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90361' 00:06:17.709 killing process with pid 90361 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 90361 00:06:17.709 13:34:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 90361 00:06:18.277 13:34:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 90394 ]] 00:06:18.277 13:34:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 90394 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 90394 ']' 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 90394 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.277 
13:34:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90394 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90394' 00:06:18.277 killing process with pid 90394 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 90394 00:06:18.277 13:34:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 90394 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 90361 ]] 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 90361 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 90361 ']' 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 90361 00:06:18.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (90361) - No such process 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 90361 is not found' 00:06:18.537 Process with pid 90361 is not found 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 90394 ]] 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 90394 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 90394 ']' 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 90394 00:06:18.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (90394) - No such process 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 90394 is not found' 00:06:18.537 Process with pid 90394 is not found 00:06:18.537 13:34:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.537 00:06:18.537 real 0m18.260s 00:06:18.537 user 0m30.395s 00:06:18.537 sys 0m6.392s 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.537 13:34:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.537 ************************************ 00:06:18.537 END TEST cpu_locks 00:06:18.537 ************************************ 00:06:18.537 00:06:18.537 real 0m42.325s 00:06:18.537 user 1m18.408s 00:06:18.537 sys 0m10.452s 00:06:18.537 13:34:15 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.537 13:34:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.537 ************************************ 00:06:18.537 END TEST event 00:06:18.537 ************************************ 00:06:18.537 13:34:15 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.537 13:34:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.537 13:34:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.537 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:18.537 ************************************ 00:06:18.537 START TEST thread 00:06:18.537 ************************************ 00:06:18.537 13:34:15 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.796 * Looking for test storage... 00:06:18.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:18.796 13:34:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.796 13:34:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:18.796 13:34:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.796 13:34:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.796 ************************************ 00:06:18.796 START TEST thread_poller_perf 00:06:18.796 ************************************ 00:06:18.796 13:34:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.796 [2024-07-25 13:34:15.526943] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:18.796 [2024-07-25 13:34:15.527024] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91008 ] 00:06:18.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.796 [2024-07-25 13:34:15.565597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.796 [2024-07-25 13:34:15.599896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.796 [2024-07-25 13:34:15.639050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.796 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.174 ====================================== 00:06:20.174 busy:2505698994 (cyc) 00:06:20.174 total_run_count: 435000 00:06:20.174 tsc_hz: 2500000000 (cyc) 00:06:20.174 ====================================== 00:06:20.174 poller_cost: 5760 (cyc), 2304 (nsec) 00:06:20.174 00:06:20.174 real 0m1.198s 00:06:20.174 user 0m1.103s 00:06:20.174 sys 0m0.092s 00:06:20.174 13:34:16 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.174 13:34:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.174 ************************************ 00:06:20.174 END TEST thread_poller_perf 00:06:20.174 ************************************ 00:06:20.174 13:34:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.174 13:34:16 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:20.174 13:34:16 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.174 13:34:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.174 ************************************ 00:06:20.174 START TEST thread_poller_perf 00:06:20.174 ************************************ 00:06:20.174 13:34:16 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.174 [2024-07-25 13:34:16.796656] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
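A quick sanity check on the poller_perf summary printed above: poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. For the timed (-l 1) run:

  # Reproduces "poller_cost: 5760 (cyc), 2304 (nsec)" from the raw counters:
  awk 'BEGIN { cyc = 2505698994 / 435000;
               printf "%d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / 2500000000 }'

The -l 0 run just below checks out the same way (2501708564 / 5756000 ≈ 434 cyc ≈ 173 ns); the untimed pollers come out roughly 13x cheaper per call, presumably because a 1 µs period poller pays timer bookkeeping on every iteration.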
00:06:20.174 [2024-07-25 13:34:16.796748] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91174 ] 00:06:20.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.174 [2024-07-25 13:34:16.835261] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.174 [2024-07-25 13:34:16.868917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.174 [2024-07-25 13:34:16.905933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.174 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:21.112 ====================================== 00:06:21.112 busy:2501708564 (cyc) 00:06:21.112 total_run_count: 5756000 00:06:21.112 tsc_hz: 2500000000 (cyc) 00:06:21.112 ====================================== 00:06:21.112 poller_cost: 434 (cyc), 173 (nsec) 00:06:21.112 00:06:21.112 real 0m1.188s 00:06:21.112 user 0m1.103s 00:06:21.112 sys 0m0.082s 00:06:21.112 13:34:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.112 13:34:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.112 ************************************ 00:06:21.112 END TEST thread_poller_perf 00:06:21.112 ************************************ 00:06:21.371 13:34:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:21.371 00:06:21.371 real 0m2.646s 00:06:21.371 user 0m2.297s 00:06:21.371 sys 0m0.362s 00:06:21.371 13:34:18 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.371 13:34:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.371 ************************************ 00:06:21.371 END TEST thread 00:06:21.371 ************************************ 00:06:21.371 13:34:18 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:21.371 13:34:18 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.371 13:34:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.371 13:34:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.371 13:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.371 ************************************ 00:06:21.371 START TEST app_cmdline 00:06:21.371 ************************************ 00:06:21.371 13:34:18 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.371 * Looking for test storage... 
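app_cmdline, starting above, exercises spdk_tgt's RPC whitelist: the target below is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, and the test probes the whitelist in both directions. In plain rpc.py terms:

  scripts/rpc.py spdk_get_version        # allowed -> version object (shown below)
  scripts/rpc.py rpc_get_methods         # allowed -> exactly the two whitelisted names
  scripts/rpc.py env_dpdk_get_mem_stats  # not whitelisted -> -32601 "Method not found"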
00:06:21.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.371 13:34:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.372 13:34:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=91448 00:06:21.372 13:34:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 91448 00:06:21.372 13:34:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.372 13:34:18 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 91448 ']' 00:06:21.372 13:34:18 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.372 13:34:18 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.372 13:34:18 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.372 13:34:18 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.372 13:34:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.372 [2024-07-25 13:34:18.245951] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:21.372 [2024-07-25 13:34:18.246003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91448 ] 00:06:21.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.631 [2024-07-25 13:34:18.285350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:21.631 [2024-07-25 13:34:18.319217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.631 [2024-07-25 13:34:18.357866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.198 13:34:19 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.198 13:34:19 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:22.198 13:34:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.457 { 00:06:22.457 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:22.457 "fields": { 00:06:22.457 "major": 24, 00:06:22.457 "minor": 9, 00:06:22.457 "patch": 0, 00:06:22.457 "suffix": "-pre", 00:06:22.457 "commit": "704257090" 00:06:22.457 } 00:06:22.457 } 00:06:22.457 13:34:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.457 13:34:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.458 13:34:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.458 13:34:19 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.717 request: 00:06:22.717 { 00:06:22.717 "method": 
"env_dpdk_get_mem_stats", 00:06:22.717 "req_id": 1 00:06:22.717 } 00:06:22.717 Got JSON-RPC error response 00:06:22.717 response: 00:06:22.717 { 00:06:22.717 "code": -32601, 00:06:22.717 "message": "Method not found" 00:06:22.717 } 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.717 13:34:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 91448 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 91448 ']' 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 91448 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91448 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91448' 00:06:22.717 killing process with pid 91448 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@969 -- # kill 91448 00:06:22.717 13:34:19 app_cmdline -- common/autotest_common.sh@974 -- # wait 91448 00:06:22.977 00:06:22.977 real 0m1.680s 00:06:22.977 user 0m1.945s 00:06:22.977 sys 0m0.487s 00:06:22.977 13:34:19 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.977 13:34:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.977 ************************************ 00:06:22.977 END TEST app_cmdline 00:06:22.977 ************************************ 00:06:22.977 13:34:19 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:22.977 13:34:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.977 13:34:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.977 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:22.977 ************************************ 00:06:22.977 START TEST version 00:06:22.977 ************************************ 00:06:22.977 13:34:19 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.236 * Looking for test storage... 
00:06:23.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.236 13:34:19 version -- app/version.sh@17 -- # get_header_version major 00:06:23.236 13:34:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.236 13:34:19 version -- app/version.sh@14 -- # cut -f2 00:06:23.236 13:34:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.237 13:34:19 version -- app/version.sh@17 -- # major=24 00:06:23.237 13:34:19 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.237 13:34:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.237 13:34:19 version -- app/version.sh@14 -- # cut -f2 00:06:23.237 13:34:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.237 13:34:19 version -- app/version.sh@18 -- # minor=9 00:06:23.237 13:34:19 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.237 13:34:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.237 13:34:19 version -- app/version.sh@14 -- # cut -f2 00:06:23.237 13:34:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.237 13:34:19 version -- app/version.sh@19 -- # patch=0 00:06:23.237 13:34:19 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.237 13:34:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.237 13:34:19 version -- app/version.sh@14 -- # cut -f2 00:06:23.237 13:34:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.237 13:34:19 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.237 13:34:19 version -- app/version.sh@22 -- # version=24.9 00:06:23.237 13:34:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.237 13:34:19 version -- app/version.sh@28 -- # version=24.9rc0 00:06:23.237 13:34:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.237 13:34:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.237 13:34:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:23.237 13:34:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:23.237 00:06:23.237 real 0m0.188s 00:06:23.237 user 0m0.091s 00:06:23.237 sys 0m0.140s 00:06:23.237 13:34:20 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.237 13:34:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.237 ************************************ 00:06:23.237 END TEST version 00:06:23.237 ************************************ 00:06:23.237 13:34:20 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:23.237 13:34:20 -- spdk/autotest.sh@202 -- # uname -s 00:06:23.237 13:34:20 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:23.237 13:34:20 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:23.237 13:34:20 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:23.237 13:34:20 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
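The final assertion of version.sh above compares that header-derived string against the in-tree Python package. Condensed, following the branch this run took (patch is 0, so it is not appended; the -pre suffix maps to rc0 per app/version.sh@28 above):

  version=24.9
  # (( patch != 0 )) && version=$version.$patch   -- skipped here, patch=0
  version=24.9rc0
  py_version=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]]   # both are 24.9rc0 in this run

(PYTHONPATH=python assumes the repo root as cwd; the trace above uses the absolute /var/jenkins/... paths instead.)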
00:06:23.237 13:34:20 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:23.237 13:34:20 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:23.237 13:34:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.237 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.496 13:34:20 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:23.496 13:34:20 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:23.496 13:34:20 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:23.496 13:34:20 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:23.496 13:34:20 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:23.496 13:34:20 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:23.496 13:34:20 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.496 13:34:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.496 13:34:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.496 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.496 ************************************ 00:06:23.496 START TEST nvmf_tcp 00:06:23.496 ************************************ 00:06:23.496 13:34:20 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.496 * Looking for test storage... 00:06:23.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.496 13:34:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.496 13:34:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.496 13:34:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.496 13:34:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.496 13:34:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.496 13:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.496 ************************************ 00:06:23.496 START TEST nvmf_target_core 00:06:23.496 ************************************ 00:06:23.496 13:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.756 * Looking for test storage... 00:06:23.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.756 ************************************ 00:06:23.756 START TEST nvmf_abort 00:06:23.756 ************************************ 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.756 * Looking for test storage... 
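abort.sh re-sources nvmf/common.sh (traced below), which among other things derives the initiator's identity with nvme-cli. A sketch; the exact parameter expansion common.sh uses for the host ID may differ:

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID portion (assumed extraction)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")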
00:06:23.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.756 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
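nvmftestinit, entered above, ends in nvmf_tcp_init, which wires the two E810 ports into a point-to-point test topology: one port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the commands traced further below:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # both directions are pinged below before the test starts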
00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:23.757 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.406 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.406 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:30.406 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
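The long run of nvmf/common.sh lines above is building lookup tables of supported NIC PCI IDs (e810, x722, and Mellanox families) before scanning the host. In this run both ports of an Intel E810 NIC (0x8086:0x159b, ice driver) match, and for each matching device the kernel netdev name is read from sysfs, roughly:

  for pci in "${pci_devs[@]}"; do                     # 0000:af:00.0 and 0000:af:00.1 here
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")            # -> cvl_0_0, cvl_0_1 below
  done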
00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:30.407 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:30.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.407 13:34:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:30.407 Found net devices under 0000:af:00.0: cvl_0_0 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:30.407 Found net devices under 0000:af:00.1: cvl_0_1 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.407 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:30.407 
13:34:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:30.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:06:30.407 00:06:30.407 --- 10.0.0.2 ping statistics --- 00:06:30.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.407 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:30.407 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:06:30.407 00:06:30.407 --- 10.0.0.1 ping statistics --- 00:06:30.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.407 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=95174 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 95174 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 95174 ']' 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.667 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.667 [2024-07-25 13:34:27.386544] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:30.667 [2024-07-25 13:34:27.386595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.667 [2024-07-25 13:34:27.429365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.667 [2024-07-25 13:34:27.463633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.667 [2024-07-25 13:34:27.506216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.667 [2024-07-25 13:34:27.506255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.667 [2024-07-25 13:34:27.506265] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.667 [2024-07-25 13:34:27.506273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.667 [2024-07-25 13:34:27.506280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
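Editor's note: the trace above reduces to a two-port loopback bench. One port (cvl_0_0) is moved into a private network namespace and carries the target address, its sibling (cvl_0_1) stays in the root namespace as the initiator side, and the target binary is launched inside the namespace. A minimal sketch of the same setup, using the device names and addresses from this run (they are host-specific):

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The -m 0xE core mask selects cores 1-3, which is why exactly three reactor threads report in just below.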
00:06:30.667 [2024-07-25 13:34:27.506381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.667 [2024-07-25 13:34:27.506486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.667 [2024-07-25 13:34:27.506488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.602 [2024-07-25 13:34:28.242080] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.602 Malloc0 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.602 Delay0 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]]
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:31.602 [2024-07-25 13:34:28.325996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:31.602 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:06:31.602 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.602 [2024-07-25 13:34:28.442642] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:06:34.162 Initializing NVMe Controllers
00:06:34.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:06:34.162 controller IO queue size 128 less than required
00:06:34.162 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:06:34.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:06:34.162 Initialization complete. Launching workers.
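Editor's note: the abort case deliberately stacks a delay bdev on top of a malloc bdev so reads stay in flight long enough to be abortable. Condensed from the rpc_cmd traces above (rpc_cmd is the harness wrapper around scripts/rpc.py; paths shortened), the target-side configuration and the workload are:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # roughly 1 s of injected latency per op, taking the arguments as microseconds
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the summary that follows, the reads reported as "failed" appear to be the ones the submitted aborts caught in flight, and "unsuccess" is the tool's own label for aborts the controller could no longer match to a pending command.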
00:06:34.162 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41181
00:06:34.162 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41242, failed to submit 62
00:06:34.162 success 41185, unsuccess 57, failed 0
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:34.162 rmmod nvme_tcp
00:06:34.162 rmmod nvme_fabrics
00:06:34.162 rmmod nvme_keyring
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 95174 ']'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 95174
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 95174 ']'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 95174
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95174
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95174'
killing process with pid 95174
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 95174
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 95174
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:34.162 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:36.068
00:06:36.068 real 0m12.399s
00:06:36.068 user 0m13.327s
00:06:36.068 sys 0m6.195s
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:36.068 ************************************
00:06:36.068 END TEST nvmf_abort
00:06:36.068 ************************************
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:36.068 13:34:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:36.327 ************************************
00:06:36.327 START TEST nvmf_ns_hotplug_stress ************************************
00:06:36.327 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:06:36.327 * Looking for test storage...
00:06:36.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.328 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:42.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.897 13:34:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:42.897 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:42.897 Found net devices under 0000:af:00.0: cvl_0_0 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.897 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:42.898 Found net devices under 0000:af:00.1: cvl_0_1 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.898 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:43.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:43.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms
00:06:43.157
00:06:43.157 --- 10.0.0.2 ping statistics ---
00:06:43.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:43.157 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:43.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:43.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:06:43.157
00:06:43.157 --- 10.0.0.1 ping statistics ---
00:06:43.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:43.157 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=99574
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 99574
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 99574 ']'
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
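Editor's note: waitforlisten, whose trace continues below, is the gate between launching nvmf_tgt and configuring it: it blocks until the application answers on /var/tmp/spdk.sock, retrying up to max_retries=100. A hypothetical condensation of that readiness poll (the real helper lives in common/autotest_common.sh; this is only a sketch of its effect):

    # poll the RPC socket until the target is ready to accept configuration
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done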
00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.157 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:43.157 [2024-07-25 13:34:39.992586] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:43.157 [2024-07-25 13:34:39.992634] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.157 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.157 [2024-07-25 13:34:40.033209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.416 [2024-07-25 13:34:40.068975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.416 [2024-07-25 13:34:40.110136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.416 [2024-07-25 13:34:40.110177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.416 [2024-07-25 13:34:40.110186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.416 [2024-07-25 13:34:40.110195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.416 [2024-07-25 13:34:40.110202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:43.416 [2024-07-25 13:34:40.110302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.416 [2024-07-25 13:34:40.110406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.416 [2024-07-25 13:34:40.110408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:43.983 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.242 [2024-07-25 13:34:40.997530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.242 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.500 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.500 [2024-07-25 13:34:41.378840] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.758 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.759 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:45.017 Malloc0 00:06:45.017 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:45.276 Delay0 00:06:45.276 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.276 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:45.534 NULL1 00:06:45.534 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:45.793 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:45.793 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99966 00:06:45.793 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:45.793 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.793 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.777 Read completed with error (sct=0, sc=11) 00:06:46.777 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.037 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:47.037 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:47.296 true 00:06:47.296 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:47.296 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.233 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.233 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:48.233 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:48.492 true 00:06:48.492 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:48.492 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.751 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.751 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:48.751 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:49.009 true 00:06:49.010 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:49.010 13:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.388 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.388 13:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:50.388 13:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:50.388 true 00:06:50.388 13:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:50.388 13:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.359 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.618 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:51.618 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:51.618 true 00:06:51.618 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:51.618 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.877 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.135 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1006 00:06:52.135 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:52.135 true 00:06:52.135 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:52.135 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.394 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.653 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:52.653 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:52.653 true 00:06:52.653 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:52.653 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.911 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.170 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:53.170 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:53.170 true 00:06:53.429 13:34:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:53.429 13:34:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.367 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.627 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:54.627 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:54.886 true 00:06:54.886 13:34:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:54.886 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.875 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.875 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:55.875 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:56.133 true 00:06:56.133 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:56.133 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.133 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.393 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:56.393 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:56.650 true 00:06:56.650 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:56.650 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.026 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.026 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:58.026 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:58.026 true 00:06:58.026 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:58.026 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.961 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.220 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:59.220 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:59.220 true 00:06:59.220 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:59.220 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.479 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.738 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:59.738 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:59.738 true 00:06:59.738 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:06:59.738 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.996 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.255 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:00.255 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:00.255 true 00:07:00.513 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:07:00.513 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.514 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.772 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:00.772 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:01.030 true 00:07:01.030 13:34:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:07:01.030 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.030 13:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.289 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:01.289 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:01.548 true 00:07:01.548 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:07:01.548 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.548 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.806 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:01.806 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:02.064 true 00:07:02.064 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966 00:07:02.064 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.064 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.335 [2024-07-25 13:34:59.114909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.335 [2024-07-25 13:34:59.114994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.335 [2024-07-25 13:34:59.115045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.335 [2024-07-25 13:34:59.115086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.335 [2024-07-25 13:34:59.115126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.335 
[2024-07-25 13:34:59.115172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.335
last message repeated several hundred times ([2024-07-25 13:34:59.115213] through [2024-07-25 13:34:59.135586], wall clock 00:07:02.335-00:07:02.339)
Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.135629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.135670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.135717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.135766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.136989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137034] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.137967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 
[2024-07-25 13:34:59.138091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.339 [2024-07-25 13:34:59.138678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.138724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.138770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.139955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140676] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.140989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 
[2024-07-25 13:34:59.141695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.141785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.142988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.143026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.143070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.143108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.143146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.340 [2024-07-25 13:34:59.143196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
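Every suppressed entry above records the same rejection: a read for NLB 1 block of 512 bytes arrives with an SGL that describes only 1 byte of buffer, so nvmf_bdev_ctrlr_read_cmd fails the command before it reaches the bdev, and the suppressed completion status (sct=0, sc=15) corresponds to NVMe's generic status 0x0F, Data SGL Length Invalid. A minimal shell re-creation of the arithmetic behind the repeated line, with the three values copied from the entries above:

    # values copied from the suppressed log entries above
    nlb=1; block_size=512; sgl_length=1
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
    fi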
00:07:02.340 [duplicate ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* entries, timestamps 13:34:59.143227 through 13:34:59.143572, elided]
00:07:02.340 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:02.340 [duplicate *ERROR* entries, timestamps 13:34:59.143605 through 13:34:59.143967, elided]
00:07:02.340 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:02.340 [duplicate *ERROR* entries, timestamps 13:34:59.144011 through 13:34:59.145708, elided]
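The two xtrace lines kept above are the test's actual work at this point: ns_hotplug_stress.sh sets null_size=1019 and then calls the SPDK RPC bdev_null_resize to resize the null bdev NULL1 under live I/O, which is what keeps the in-flight reads racing against a changing namespace. A standalone sketch of that step (the loop and its bounds are assumptions for illustration; only the single null_size=1019 call appears in this trace):

    # hypothetical re-run of the traced resize step; rpc.py and
    # bdev_null_resize are the SPDK tooling seen in the trace, the
    # loop bounds are illustrative
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for null_size in $(seq 1015 1019); do
        "$rpc" bdev_null_resize NULL1 "$null_size"   # resize NULL1 to the new size
    done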
00:07:02.341 [duplicate ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* entries, timestamps 13:34:59.145756 through 13:34:59.156451, elided]
[2024-07-25 13:34:59.156494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.156983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.157022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.342 [2024-07-25 13:34:59.157057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.157971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.158974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159058] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.159997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 
[2024-07-25 13:34:59.160151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.160990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.161988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162595] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.162998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.343 [2024-07-25 13:34:59.163963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 
[2024-07-25 13:34:59.164185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.164994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.165982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166282] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.166987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 
[2024-07-25 13:34:59.167697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.167969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.168992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.169957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170318] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.344 [2024-07-25 13:34:59.170403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.170962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 
[2024-07-25 13:34:59.171444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.171974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.345 [2024-07-25 13:34:59.172544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:02.345 [2024-07-25 13:34:59.173033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:02.346 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:02.349 [2024-07-25 13:34:59.199124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:02.349
[2024-07-25 13:34:59.199172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.199985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.200534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.349 [2024-07-25 13:34:59.201432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201760] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.201986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 
[2024-07-25 13:34:59.202889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.202969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.203637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.204995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205533] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.205998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 
[2024-07-25 13:34:59.206557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.206789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.207971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.208017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.208060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.350 [2024-07-25 13:34:59.208104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.208969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209127] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.351 [2024-07-25 13:34:59.209454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.209911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 
[2024-07-25 13:34:59.210710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.612 [2024-07-25 13:34:59.210894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.210940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.210987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.211992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212832] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.212990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.213991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 
[2024-07-25 13:34:59.214374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.613 [2024-07-25 13:34:59.214844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.214882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.214923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.214961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.214996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.215982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216845] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.216973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.217924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 
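The burst above appears to be expected output from negative-path unit tests repeatedly driving nvmf_bdev_ctrlr_read_cmd() into its error branch: a read is rejected when the requested number of logical blocks (NLB) times the block size exceeds the length of the SGL data buffer supplied with the command. Below is a minimal standalone sketch of that validation; it is illustrative only, not the SPDK source, and the function name, parameters, and return convention are assumptions.

/*
 * Sketch of the length check that emits the repeated *ERROR* line above.
 * Names and the error-return value are illustrative, not SPDK's.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Reject a read whose data buffer (SGL) is too small to hold NLB blocks. */
static int
check_read_sgl_length(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* Mirrors the logged format: "Read NLB 1 * block size 512 > SGL length 1" */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
		return -1; /* SPDK would instead complete the request with an invalid-SGL-length status */
	}
	return 0;
}

int
main(void)
{
	/* The case the test iterates: NLB 1, block size 512, a 1-byte SGL. */
	return check_read_sgl_length(1, 512, 1) == -1 ? 0 : 1;
}

With NLB 1 and a 512-byte block, any SGL shorter than 512 bytes fails the check, so every iteration logs the same message; the volume of output reflects the test looping over this case, not a fault in the run.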
[2024-07-25 13:34:59.217969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.218971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.219010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.219046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.219086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.614 [2024-07-25 13:34:59.219614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.219993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220456] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.220964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 
[2024-07-25 13:34:59.221495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.221991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.222995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.615 [2024-07-25 13:34:59.223792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.223837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.223881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.223924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.223966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224050] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.616 [2024-07-25 13:34:59.224765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 
[2024-07-25 13:34:59.566450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.566582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.567671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.567738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.567795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.567853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.567905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.567958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.568974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.569955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570412] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.570962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.906 [2024-07-25 13:34:59.571828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.571886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.571946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 
[2024-07-25 13:34:59.572067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.572975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.573977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574932] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.574983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:02.907 [2024-07-25 13:34:59.575622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.575688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.575766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.575833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.575891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.575952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 true 00:07:02.907 [2024-07-25 13:34:59.576250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.576970] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.577024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.577076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.577128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.577184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.907 [2024-07-25 13:34:59.577236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.577991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 
[2024-07-25 13:34:59.578218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.578880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.579980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.908 [2024-07-25 13:34:59.580883] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, entry timestamps 13:34:59.580932 through 13:34:59.596705, console time 00:07:02.908-00:07:02.912 ...]
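The repeated message is SPDK's read-command validation failing: a read for NLB 1 (one 512-byte block, so a 512-byte transfer) arrived with an SGL describing only 1 byte of buffer, so ctrlr_bdev.c rejects the command, and the same rejection fires for every read in the burst. When triaging a flood like this, per-message counts are more useful than the raw stream; a minimal grep/uniq sketch, assuming the console output was saved to build.log (the filename is a placeholder, not taken from this log):

    # Tally each distinct *ERROR* message, stripping the trailing
    # per-entry console timestamp so identical messages collapse
    # into one bucket. "build.log" is a placeholder filename.
    grep -oE 'ctrlr_bdev\.c: *[0-9]+:[A-Za-z_]+: \*ERROR\*: [^[]+' build.log |
      sed -E 's/ [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3} *$//' |
      sort | uniq -c | sort -rn | head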
[... the same *ERROR* line repeated, entry timestamps 13:34:59.596758 through 13:34:59.597417 ...]
00:07:02.912 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
[... the same *ERROR* line repeated, entry timestamps 13:34:59.597472 through 13:34:59.597659 ...]
00:07:02.912 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the same *ERROR* line repeated, entry timestamps 13:34:59.597693 through 13:34:59.597778 ...]
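The two xtrace lines above show what the stress script is doing while those reads fail: "kill -0 99966" probes that the target process is still alive, then rpc.py detaches namespace 1 from nqn.2016-06.io.spdk:cnode1 in mid-workload. A minimal sketch of that hotplug cycle follows; the loop structure, sleep interval, and the Malloc0 bdev name are illustrative assumptions, not taken from this log — only the two RPC invocations and the PID appear above:

    # Sketch of the namespace hotplug cycle ns_hotplug_stress.sh is driving.
    # nvmf_subsystem_remove_ns/add_ns are standard SPDK RPCs; the loop,
    # the sleeps, and the Malloc0 bdev name are assumptions.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    target_pid=99966            # nvmf_tgt PID, as probed by "kill -0" above

    while kill -0 "$target_pid" 2>/dev/null; do    # target still running?
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # detach nsid 1 under I/O
        sleep 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # re-attach for the next round
        sleep 1
    done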
[... the same *ERROR* line repeated continuously, entry timestamps 13:34:59.597820 through 13:34:59.607976, console time 00:07:02.912-00:07:02.915 ...]
00:07:02.915 [2024-07-25 13:34:59.608017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.608970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609532] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.609963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 
[2024-07-25 13:34:59.610597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.610962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.611974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.915 [2024-07-25 13:34:59.612243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.612999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613158] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.613974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 
[2024-07-25 13:34:59.614693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.614985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.916 [2024-07-25 13:34:59.615689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.615991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616814] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.616988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.617962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 
[2024-07-25 13:34:59.618390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.618991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.619965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.917 [2024-07-25 13:34:59.620272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620933] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.620973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.621951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 
[2024-07-25 13:34:59.621983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.622962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.623998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:02.918 [2024-07-25 13:34:59.624309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.918 [2024-07-25 13:34:59.624531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:02.918 [2024-07-25 13:34:59.624568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:02.918-00:07:02.924 (the preceding ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats verbatim, with only its timestamps advancing, through [2024-07-25 13:34:59.651481]) 00:07:02.924
[2024-07-25 13:34:59.651526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.651985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.652994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.924 [2024-07-25 13:34:59.653682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.653978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654117] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.654972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 
[2024-07-25 13:34:59.655343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.655662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.656981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.925 [2024-07-25 13:34:59.657827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.657874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.657924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.657975] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.658883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 
[2024-07-25 13:34:59.659488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.659963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.660979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661738] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.661961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.926 [2024-07-25 13:34:59.662873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.662909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.662949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.662995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 
[2024-07-25 13:34:59.663264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.663975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.664971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.665970] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.666953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 
[2024-07-25 13:34:59.667143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.927 [2024-07-25 13:34:59.667643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.667981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.668023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.668063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.668103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.668146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.668185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.928 [2024-07-25 13:34:59.668223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:02.928 [2024-07-25 13:34:59.668255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:02.928 [... identical *ERROR* entries from 13:34:59.668294 through 13:34:59.675670 omitted ...]
00:07:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
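The flood of identical *ERROR* lines above comes from one validation in nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309): a read command is rejected when the data it would transfer, NLB (number of logical blocks) times the logical block size, exceeds the buffer length described by the command's SGL; here 1 * 512 > 1. A minimal sketch of that check follows, with hypothetical simplified types for illustration rather than SPDK's actual request structures:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified request descriptor for illustration only --
 * these field names are assumptions, not SPDK's real structs. */
struct read_cmd {
	uint64_t nlb;        /* number of logical blocks to read */
	uint32_t block_size; /* logical block size in bytes (512 in the log) */
	uint32_t sgl_length; /* total payload length described by the SGL */
};

/* Reject the read when NLB * block size exceeds the SGL length --
 * the condition the repeated *ERROR* line above reports. */
static int read_cmd_len_ok(const struct read_cmd *cmd)
{
	if (cmd->nlb * cmd->block_size > cmd->sgl_length) {
		fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
			(unsigned long long)cmd->nlb, cmd->block_size,
			cmd->sgl_length);
		return 0; /* invalid: completed back to the host as an error */
	}
	return 1;
}

int main(void)
{
	/* The failing case from the log: one 512-byte block vs a 1-byte SGL. */
	struct read_cmd cmd = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
	return read_cmd_len_ok(&cmd) ? 0 : 1;
}

With inputs like these the check fails on every submission, which is why the unit test run logs the same line back to back and the host side keeps seeing "Read completed with error" until message suppression kicks in, as noted above.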
00:07:02.929 [... identical *ERROR* entries from 13:34:59.675719 through 13:34:59.695663 omitted; prefixed timestamps advance from 00:07:02.929 to 00:07:02.933 ...]
00:07:02.933 [2024-07-25 13:34:59.695707] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.695757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.695800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.695848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.695896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.695943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.695988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.933 [2024-07-25 13:34:59.696416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 
[2024-07-25 13:34:59.696865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.696990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.697962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.698983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699603] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.699983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.934 [2024-07-25 13:34:59.700545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.700586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.700633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 
[2024-07-25 13:34:59.700671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.700705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.700750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.700924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.701982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.702986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703353] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.703974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.704916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 
[2024-07-25 13:34:59.704963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.935 [2024-07-25 13:34:59.705749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.705789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.705825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.705865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.705908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.705944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.705991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.706997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707165] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.707973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 
[2024-07-25 13:34:59.708805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.708968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.709982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.936 [2024-07-25 13:34:59.710453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.710499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.710545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711478] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.711999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 [2024-07-25 13:34:59.712537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.937 
[2024-07-25 13:34:59.712575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error line repeated several hundred times, timestamps 2024-07-25 13:34:59.712613 through 13:34:59.726928; duplicates collapsed ...]
00:07:02.940 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same error line continues to repeat, timestamps 13:34:59.727395 through 13:34:59.739954; duplicates collapsed ...]
size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.738804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.738846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.738897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.738947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.942 [2024-07-25 13:34:59.739699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.739750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.739797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.739849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.739897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.739954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740426] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.740999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 
[2024-07-25 13:34:59.741530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.741986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.742994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.743977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.744016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.744056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.744104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.744146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.744187] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.943 [2024-07-25 13:34:59.744228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.744971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 
[2024-07-25 13:34:59.745309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.745963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.746991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.747954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748050] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.748984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.944 [2024-07-25 13:34:59.749028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 
[2024-07-25 13:34:59.749233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.749644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.750980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751890] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.751979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.752800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 
[2024-07-25 13:34:59.753460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.753960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.945 [2024-07-25 13:34:59.754280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.754970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755740] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.755975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.756964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.757014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.757058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.757104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.757152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.757201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 [2024-07-25 13:34:59.757254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.946 
[2024-07-25 13:34:59.757299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:309 read error ("Read NLB 1 * block size 512 > SGL length 1") repeated back-to-back through 2024-07-25 13:34:59.762671 (rel. time 00:07:02.946-00:07:03.213); individual entries omitted ...]
00:07:03.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.213 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.213 [2024-07-25 13:34:59.963524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:309 read error repeated back-to-back through 2024-07-25 13:34:59.978818 (rel. time 00:07:03.213-00:07:03.216); individual entries omitted ...]
00:07:03.216 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:07:03.216 13:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:07:03.475 true
00:07:03.475 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:03.475 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.410 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:04.410 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:07:04.410 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:07:04.669 true
00:07:04.669 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:04.669 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.927 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.928 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:07:04.928 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:07:05.186 true
00:07:05.186 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:05.186 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.564 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:06.564 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
13:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:07:06.564 true
00:07:06.564 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:06.564 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:07.501 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:07.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:07.760 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:07:07.760 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:07:07.760 true
00:07:07.760 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:07.760 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.019 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.278 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:07:08.278 13:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:07:08.278 true
00:07:08.278 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:08.278 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.536 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.793 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:08.793 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:08.793 true
00:07:08.793 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:08.793 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.051 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.310 13:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:09.310 13:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:09.310 true
00:07:09.569 13:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:09.569 13:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:10.506 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:10.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:10.765 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:10.765 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:11.023 true
00:07:11.023 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:11.023 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.958 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:11.958 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:11.958 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:12.217 true
00:07:12.217 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:12.217 13:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.476 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:12.476 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
13:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:12.735 true
00:07:12.735 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:12.735 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:14.113 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:14.113 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:07:14.113 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:07:14.371 true
00:07:14.371 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:14.371 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.305 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.305 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:07:15.305 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:07:15.564 true
00:07:15.564 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:15.564 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.564 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.822 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:07:15.822 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:07:16.081 Initializing NVMe Controllers
00:07:16.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:16.081 Controller
IO queue size 128, less than required.
00:07:16.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:16.081 Controller IO queue size 128, less than required.
00:07:16.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:16.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:16.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:16.081 Initialization complete. Launching workers.
00:07:16.081 ========================================================
00:07:16.081                                                                    Latency(us)
00:07:16.081 Device Information                                              :       IOPS      MiB/s    Average        min         max
00:07:16.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2136.27       1.04   35877.95    1500.29  1085787.90
00:07:16.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15946.67       7.79    8027.44    1885.51   360033.72
00:07:16.081 ========================================================
00:07:16.081 Total                                                           :   18082.94       8.83   11317.63    1500.29  1085787.90
00:07:16.081
00:07:16.081 true
00:07:16.081 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99966
00:07:16.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99966) - No such process
00:07:16.081 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99966
00:07:16.081 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:16.340 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:16.340 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:16.340 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:16.340 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:16.340 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.340 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:16.599 null0
00:07:16.599 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.599 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.599 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:16.858 null1
00:07:16.858 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.858 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.858 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:16.858 null2 00:07:16.858 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.858 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.858 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:17.117 null3 00:07:17.117 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.117 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.117 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:17.376 null4 00:07:17.376 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.376 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.376 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:17.376 null5 00:07:17.376 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.376 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.376 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:17.635 null6 00:07:17.635 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.635 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.635 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:17.894 null7 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:17.894 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
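With PID 99966 gone, the single-namespace phase is over and the script fans out: eight null bdevs (null0..null7) are created, then one add_remove worker is launched in the background per bdev, as the @58-@64 markers here and the launches continuing below show. A sketch of that launcher, reconstructed from the trace; the two-loop structure and the reuse of $rpc_py from the previous sketch are assumptions:

    nthreads=8                                       # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # lines 59-60: create null0..null7
        $rpc_py bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks; prints the bdev name
    done
    for ((i = 0; i < nthreads; i++)); do             # lines 62-64: one background worker per bdev
        add_remove $((i + 1)) "null$i" &             # worker i churns NSID i+1
        pids+=($!)
    done
    wait "${pids[@]}"                                # line 66: PIDs 105616 105618 ... 105629 in this run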
00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
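Interleaved with those launches, each worker's own trace begins. The function behind the @14-@18 markers is ten rounds of attaching the worker's null bdev under a fixed NSID and detaching it again; the two RPC calls and the 10-iteration bound are straight from the trace, while the function wrapper and parameter passing are assumptions:

    add_remove() {
        local nsid=$1 bdev=$2                        # line 14, e.g. nsid=2 bdev=null1
        for ((i = 0; i < 10; i++)); do               # line 16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }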
00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 105616 105618 105619 105621 105623 105625 105627 105629 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.895 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.155 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.413 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.672 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.941 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.942 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.243 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
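From here to the end of the phase the eight workers run concurrently against the same subsystem, so their @16-@18 records interleave in whatever order the subshells are scheduled; the scrambled namespace sequence in the remove calls above (7, 6, 4, 8, 3, 2, 1, 5) reflects that concurrency, not an ordering problem in the test.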
00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.243 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.501 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
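Each rpc.py invocation in this churn is a synchronous JSON-RPC client call, so every add or remove has completed at the target before the next trace record appears. One way to spot-check the subsystem's surviving namespaces after a phase like this would be the standard listing RPC; this is an illustration, not part of the test itself:

    $rpc_py nvmf_get_subsystems    # JSON dump of every subsystem, including its current namespaces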
00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.759 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:20.017 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.018 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.018 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.018 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.018 13:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.276 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.534 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.792 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.792 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.792 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.793 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.051 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
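Every record in this stretch comes from just three lines of target/ns_hotplug_stress.sh: the loop guard at line 16 (the (( ++i )) / (( i < 10 )) pairs), nvmf_subsystem_add_ns at line 17, and nvmf_subsystem_remove_ns at line 18. One hot-plug pass, condensed into a standalone sketch (the rpc path and NQN are taken from the log; treating the per-pass ordering as a shuffle is inferred from the trace, not from the script itself):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # attach namespaces 1-8 in shuffled order; nsid n is always backed by bdev null(n-1)
  for n in 8 7 5 3 2 6 4 1; do                 # order from the first pass traced above
      "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
  done
  # then detach all eight again, in another shuffled order
  for n in 8 7 5 6 3 1 2 4; do
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
  done

The line-16 guard (i < 10) bounds how long this add/remove churn runs before the script falls through to teardown.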
00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.311 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.311 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.311 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.311 13:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.311 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.569 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.569 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.569 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.569 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:21.570 rmmod nvme_tcp 00:07:21.570 rmmod nvme_fabrics 00:07:21.570 rmmod nvme_keyring 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 99574 ']' 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 99574 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 99574 ']' 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 99574 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.570 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99574 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99574' 00:07:21.828 killing process with pid 99574 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 99574 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 99574 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.828 13:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:24.359 00:07:24.359 real 0m47.763s 00:07:24.359 user 3m4.789s 00:07:24.359 sys 0m20.894s 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.359 ************************************ 00:07:24.359 END TEST nvmf_ns_hotplug_stress 00:07:24.359 ************************************ 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.359 ************************************ 00:07:24.359 START TEST nvmf_delete_subsystem 00:07:24.359 ************************************ 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.359 * Looking for test storage... 
00:07:24.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.359 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.360 13:35:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:30.923 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
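The records that follow are nvmf/common.sh's NIC discovery: candidate ports are bucketed by PCI vendor:device ID into the e810/x722/mlx arrays declared above, matched against what the host exposes, and the link-up ports are kept. A recap of where that discovery lands for this run (all values collected from the trace below, not new output):

  pci_devs=(0000:af:00.0 0000:af:00.1)   # both 0x8086:0x159b (Intel E810, ice driver)
  NVMF_TARGET_INTERFACE=cvl_0_0          # under 0000:af:00.0; moved into a net namespace
  NVMF_INITIATOR_INTERFACE=cvl_0_1       # under 0000:af:00.1; stays in the root namespace
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_INITIATOR_IP=10.0.0.1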
00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:30.924 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:30.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:30.924 Found net devices under 0000:af:00.0: cvl_0_0 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:30.924 Found net devices under 0000:af:00.1: cvl_0_1 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:30.924 13:35:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.924 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.183 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.183 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.183 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.184 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.184 13:35:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:07:31.184 00:07:31.184 --- 10.0.0.2 ping statistics --- 00:07:31.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.184 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:07:31.184 00:07:31.184 --- 10.0.0.1 ping statistics --- 00:07:31.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.184 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.184 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=110254 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 110254 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 110254 ']' 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.443 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.443 [2024-07-25 13:35:28.143192] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
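Condensed, the nvmf_tcp_init wiring that produced those two sub-millisecond pings, followed by the target launch whose startup banner begins here (each command and address is lifted from the trace; only the backgrounding and pid capture paraphrase what nvmfappstart/waitforlisten do):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target ns (0.204 ms)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns (0.129 ms)
  modprobe nvme-tcp                                     # kernel initiator transport
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!                                            # 110254 in this run; waitforlisten then polls the RPC socket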
00:07:31.443 [2024-07-25 13:35:28.143242] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.443 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.443 [2024-07-25 13:35:28.183217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:31.443 [2024-07-25 13:35:28.217833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.443 [2024-07-25 13:35:28.257016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.443 [2024-07-25 13:35:28.257057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.443 [2024-07-25 13:35:28.257066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.443 [2024-07-25 13:35:28.257075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.443 [2024-07-25 13:35:28.257082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.443 [2024-07-25 13:35:28.257126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.443 [2024-07-25 13:35:28.257128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.380 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.381 13:35:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 [2024-07-25 13:35:29.002878] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 [2024-07-25 13:35:29.027049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 NULL1 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 Delay0 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=110391 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:32.381 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:32.381 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.381 [2024-07-25 13:35:29.113656] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
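Taken together, the setup records above assemble the delete-under-load scenario in one short sequence (arguments from the trace; rpc_cmd resolves to scripts/rpc.py against the target's RPC socket, and bdev_delay latencies are given in microseconds, so every I/O sits for about a second):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s read/write latency
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # 5 s run, QD 128, 70% reads
  sleep 2
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # fired just below, with I/O still queued

With queue depth 128 against a one-second delay bdev, the delete two seconds into the run is guaranteed to land while I/O is outstanding, which is exactly what the error burst that follows exercises.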
00:07:34.286 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:34.286 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.286 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... several hundred in-flight completions of the form "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)", plus periodic "starting I/O failed: -6", logged between 00:07:34.546 and 00:07:35.484 and interleaved with the qpair recv-state errors kept below; repeats omitted ...]
00:07:34.546 [2024-07-25 13:35:31.248152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc3c4000c00 is same with the state(5) to be set
00:07:34.547 [2024-07-25 13:35:31.248767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5021a0 is same with the state(5) to be set
00:07:35.483 [2024-07-25 13:35:32.210414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x507280 is same with the state(5) to be set
00:07:35.484 [2024-07-25 13:35:32.251029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc3c400d330 is same with the state(5) to be set
00:07:35.484 [2024-07-25 13:35:32.251204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x51eda0 is same with the state(5) to be set
00:07:35.484 [2024-07-25 13:35:32.251503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x51ef80 is same with the state(5) to be set
00:07:35.484 [2024-07-25 13:35:32.251660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x502380 is same with the state(5) to be set
00:07:35.484 Initializing NVMe Controllers
00:07:35.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:35.484 Controller IO queue size 128, less than required.
00:07:35.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:35.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:35.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:35.484 Initialization complete. Launching workers.
00:07:35.484 ========================================================
00:07:35.484 Latency(us)
00:07:35.484 Device Information : IOPS MiB/s Average min max
00:07:35.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 183.71 0.09 986018.67 885.91 2002172.32
00:07:35.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.42 0.08 893160.57 366.84 2001663.06
00:07:35.484 ========================================================
00:07:35.484 Total : 338.13 0.17 943612.10 366.84 2002172.32
00:07:35.484
00:07:35.484 [2024-07-25 13:35:32.252292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x507280 (9): Bad file descriptor
00:07:35.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:35.484 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:35.484 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:35.484 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 110391
00:07:35.484 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 110391
00:07:36.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (110391) - No such process
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 110391
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 110391
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 110391
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.053 13:35:32
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 [2024-07-25 13:35:32.781678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=111067 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:36.053 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.053 [2024-07-25 13:35:32.851113] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
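The @57/@58 polling just started, and the @60 loop traced next, implement the harness's wait-for-perf-to-die check (the same pattern used above for pid 110391); roughly, it is (a sketch of the pattern, not the verbatim script):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # perf process still alive?
    (( delay++ > 20 )) && exit 1            # fail the test after ~10 s of polling
    sleep 0.5
done
wait "$perf_pid" || true                    # reap it; the first run asserts a non-zero exit via the harness's NOT helper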
00:07:36.619 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.619 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:36.619 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.188 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.188 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:37.188 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.446 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.447 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:37.447 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.025 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.025 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:38.025 13:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.635 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.636 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:38.636 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.208 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.208 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067 00:07:39.208 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.208 Initializing NVMe Controllers 00:07:39.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.208 Controller IO queue size 128, less than required. 00:07:39.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:39.208 Initialization complete. Launching workers. 
00:07:39.208 ========================================================
00:07:39.208 Latency(us)
00:07:39.208 Device Information : IOPS MiB/s Average min max
00:07:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003686.40 1000236.93 1009698.01
00:07:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004955.47 1000280.35 1012376.76
00:07:39.208 ========================================================
00:07:39.208 Total : 256.00 0.12 1004320.93 1000236.93 1012376.76
00:07:39.208
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111067
00:07:39.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (111067) - No such process
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 111067
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:39.466 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:39.725 rmmod nvme_tcp
00:07:39.725 rmmod nvme_fabrics
00:07:39.725 rmmod nvme_keyring
00:07:39.725 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:39.725 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:39.725 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:39.725 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 110254 ']'
00:07:39.725 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 110254
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 110254 ']'
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 110254
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110254
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110254' 00:07:39.726 killing process with pid 110254 00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 110254 00:07:39.726 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 110254 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.985 13:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.891 00:07:41.891 real 0m17.873s 00:07:41.891 user 0m30.028s 00:07:41.891 sys 0m7.049s 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.891 ************************************ 00:07:41.891 END TEST nvmf_delete_subsystem 00:07:41.891 ************************************ 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.891 13:35:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.150 ************************************ 00:07:42.150 START TEST nvmf_host_management 00:07:42.150 ************************************ 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.150 * Looking for test storage... 
00:07:42.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... further repeated golangci/protoc/go toolchain segments omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain segments omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.150 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain segments omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain segments omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.151 13:35:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.722 
13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:48.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:48.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.722 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:48.723 Found net devices under 0000:af:00.0: cvl_0_0 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:48.723 Found net devices under 0000:af:00.1: cvl_0_1 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.723 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:07:48.981 00:07:48.981 --- 10.0.0.2 ping statistics --- 00:07:48.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.981 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:07:48.981 00:07:48.981 --- 10.0.0.1 ping statistics --- 00:07:48.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.981 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.981 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.982 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.982 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.982 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.982 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=115354 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 115354 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 115354 ']' 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.244 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.244 [2024-07-25 13:35:45.944275] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:49.244 [2024-07-25 13:35:45.944321] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.244 [2024-07-25 13:35:45.983518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.244 [2024-07-25 13:35:46.019987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.244 [2024-07-25 13:35:46.061202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.244 [2024-07-25 13:35:46.061243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.244 [2024-07-25 13:35:46.061253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.244 [2024-07-25 13:35:46.061261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.244 [2024-07-25 13:35:46.061268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
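Condensed, the phy-mode plumbing traced above gives the target its own network namespace so initiator and target can share one host (a sketch of the steps as logged; the cvl_0_0/cvl_0_1 device names are this rig's E810 ports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator-to-target reachability check
# then the target app is started inside the namespace, as in the EAL trace above:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &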
00:07:49.244 [2024-07-25 13:35:46.061419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.244 [2024-07-25 13:35:46.061503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.244 [2024-07-25 13:35:46.061612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.244 [2024-07-25 13:35:46.061613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.181 [2024-07-25 13:35:46.801155] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.181 Malloc0 00:07:50.181 [2024-07-25 13:35:46.867891] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=115602 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 115602 /var/tmp/bdevperf.sock 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 115602 ']' 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:50.181 { 00:07:50.181 "params": { 00:07:50.181 "name": "Nvme$subsystem", 00:07:50.181 "trtype": "$TEST_TRANSPORT", 00:07:50.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.181 "adrfam": "ipv4", 00:07:50.181 "trsvcid": "$NVMF_PORT", 00:07:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.181 "hdgst": ${hdgst:-false}, 00:07:50.181 "ddgst": ${ddgst:-false} 00:07:50.181 }, 00:07:50.181 "method": "bdev_nvme_attach_controller" 00:07:50.181 } 00:07:50.181 EOF 00:07:50.181 )") 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:50.181 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:50.182 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:50.182 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:50.182 "params": { 00:07:50.182 "name": "Nvme0", 00:07:50.182 "trtype": "tcp", 00:07:50.182 "traddr": "10.0.0.2", 00:07:50.182 "adrfam": "ipv4", 00:07:50.182 "trsvcid": "4420", 00:07:50.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.182 "hdgst": false, 00:07:50.182 "ddgst": false 00:07:50.182 }, 00:07:50.182 "method": "bdev_nvme_attach_controller" 00:07:50.182 }' 00:07:50.182 [2024-07-25 13:35:46.972520] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
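The JSON blob printed above is exactly what bdevperf consumes: gen_nvmf_target_json expands the per-subsystem here-doc template, jq normalizes it, and the test feeds it in through bash process substitution rather than a temporary file, which is why the traced command line shows --json /dev/fd/63. A hedged sketch of the equivalent invocation:

    # <(...) exposes gen_nvmf_target_json's stdout as /dev/fd/NN, so bdevperf
    # reads the generated attach-controller config as if it were a file.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10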
00:07:50.182 [2024-07-25 13:35:46.972569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115602 ] 00:07:50.182 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.182 [2024-07-25 13:35:47.008675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.182 [2024-07-25 13:35:47.044335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.441 [2024-07-25 13:35:47.082423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.441 Running I/O for 10 seconds... 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 
']' 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.010 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 [2024-07-25 13:35:47.877682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[log condensed: this WRITE and each of the READ commands that follow it (sqid:1 cid:0 through cid:62, lba:114688 through lba:122624, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) is paired with nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the identical command/completion pair repeats from 13:35:47.877682 to 13:35:47.879000]
00:07:51.012 [2024-07-25 13:35:47.879010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2735b70 is same with the state(5) to be set [2024-07-25 13:35:47.879065] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2735b70 was disconnected and freed. reset controller.
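The read_io_count=835 comparison a few entries above is the exit condition of waitforio: before the host is removed, the test polls bdevperf's RPC socket until Nvme0n1 has completed at least 100 reads, so the queue pair is torn down with I/O genuinely in flight. A minimal sketch of that loop — names and the jq filter follow the trace, but the retry count and sleep interval here are illustrative:

    # Poll bdev_get_iostat until the bdev shows real read traffic, so the
    # subsequent nvmf_subsystem_remove_host hits a busy queue pair.
    waitforio_sketch() {
        local rpc_sock=$1 bdev=$2 reads i
        for ((i = 10; i > 0; i--)); do
            reads=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            [[ $reads -ge 100 ]] && return 0
            sleep 0.25
        done
        return 1
    }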
00:07:51.012 [2024-07-25 13:35:47.879957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:51.012 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.012 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.012 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.012 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.012 task offset: 122752 on job bdev=Nvme0n1 fails 00:07:51.012 00:07:51.012 Latency(us) 00:07:51.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.012 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:51.012 Job: Nvme0n1 ended in about 0.60 seconds with error 00:07:51.012 Verification LBA range: start 0x0 length 0x400 00:07:51.012 Nvme0n1 : 0.60 1481.24 92.58 105.80 0.00 39602.54 4561.31 40055.60 00:07:51.012 =================================================================================================================== 00:07:51.012 Total : 1481.24 92.58 105.80 0.00 39602.54 4561.31 40055.60 00:07:51.012 [2024-07-25 13:35:47.881507] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.012 [2024-07-25 13:35:47.881524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23040d0 (9): Bad file descriptor 00:07:51.012 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.012 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:51.271 [2024-07-25 13:35:47.984917] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
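Read together, the abort storm, the failed-job latency table ("Job: Nvme0n1 ended in about 0.60 seconds with error"), and the closing "Resetting controller successful" notice document the fault-injection sequence this test exists to exercise: revoke the host's access while I/O is in flight, watch the target delete the submission queue, then restore access so the host's automatic reset can reconnect. Stripped of the tracing, the sequence reduces to two RPCs (a hedged sketch; the trace drives the same calls through rpc_cmd):

    # Revoke the initiator's access: in-flight commands complete with
    # ABORTED - SQ DELETION and the qpair is disconnected and freed.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access: the host-side controller reset now succeeds.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0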
00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 115602 00:07:52.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (115602) - No such process 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:52.209 { 00:07:52.209 "params": { 00:07:52.209 "name": "Nvme$subsystem", 00:07:52.209 "trtype": "$TEST_TRANSPORT", 00:07:52.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.209 "adrfam": "ipv4", 00:07:52.209 "trsvcid": "$NVMF_PORT", 00:07:52.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.209 "hdgst": ${hdgst:-false}, 00:07:52.209 "ddgst": ${ddgst:-false} 00:07:52.209 }, 00:07:52.209 "method": "bdev_nvme_attach_controller" 00:07:52.209 } 00:07:52.209 EOF 00:07:52.209 )") 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:52.209 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:52.209 "params": { 00:07:52.209 "name": "Nvme0", 00:07:52.209 "trtype": "tcp", 00:07:52.209 "traddr": "10.0.0.2", 00:07:52.209 "adrfam": "ipv4", 00:07:52.209 "trsvcid": "4420", 00:07:52.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.209 "hdgst": false, 00:07:52.209 "ddgst": false 00:07:52.209 }, 00:07:52.209 "method": "bdev_nvme_attach_controller" 00:07:52.209 }' 00:07:52.209 [2024-07-25 13:35:48.946966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:52.209 [2024-07-25 13:35:48.947018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115901 ] 00:07:52.209 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.209 [2024-07-25 13:35:48.982819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
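The "kill: (115602) - No such process" error above is benign: bdevperf had already exited via spdk_app_stop after the reset exercise, so the kill -9 at line 91 of host_management.sh finds nothing and the script masks the failure before launching a short confirmation run against the same target. A hedged sketch of that cleanup-and-rerun step, following the traced commands:

    # kill can race with a process that already exited; masking the failure
    # keeps errexit-style handling from aborting the test mid-teardown.
    kill -9 "$perfpid" || true
    rm -f /var/tmp/spdk_cpu_lock_00{1..4}
    # One-second verify pass to confirm the target still serves I/O
    # after the remove-host/reset cycle.
    build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1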
00:07:52.209 [2024-07-25 13:35:49.018251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.209 [2024-07-25 13:35:49.053744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.468 Running I/O for 1 seconds... 00:07:53.405 00:07:53.405 Latency(us) 00:07:53.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.405 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.405 Verification LBA range: start 0x0 length 0x400 00:07:53.405 Nvme0n1 : 1.01 1326.38 82.90 0.00 0.00 47590.64 9699.33 40265.32 00:07:53.405 =================================================================================================================== 00:07:53.405 Total : 1326.38 82.90 0.00 0.00 47590.64 9699.33 40265.32 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.665 rmmod nvme_tcp 00:07:53.665 rmmod nvme_fabrics 00:07:53.665 rmmod nvme_keyring 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 115354 ']' 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 115354 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 115354 ']' 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 115354 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.665 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115354 00:07:53.925 13:35:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115354' 00:07:53.925 killing process with pid 115354 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 115354 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 115354 00:07:53.925 [2024-07-25 13:35:50.742023] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.925 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:56.459 00:07:56.459 real 0m14.049s 00:07:56.459 user 0m22.939s 00:07:56.459 sys 0m6.725s 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.459 ************************************ 00:07:56.459 END TEST nvmf_host_management 00:07:56.459 ************************************ 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.459 ************************************ 00:07:56.459 START TEST nvmf_lvol 00:07:56.459 ************************************ 00:07:56.459 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.459 * Looking for test storage... 
00:07:56.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.460 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.097 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.098 13:35:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.098 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.098 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:08:03.098 00:08:03.098 --- 10.0.0.2 ping statistics --- 00:08:03.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.098 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:08:03.098 00:08:03.098 --- 10.0.0.1 ping statistics --- 00:08:03.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.098 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.098 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=119932 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 119932 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 119932 ']' 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.099 13:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.099 [2024-07-25 13:35:59.720010] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:03.099 [2024-07-25 13:35:59.720060] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.099 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.099 [2024-07-25 13:35:59.761089] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
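What nvmftestinit's xtrace above amounts to is a two-port loopback topology: the first E810 port (cvl_0_0, PCI 0000:af:00.0) is moved into a private network namespace as the target side, while its sibling (cvl_0_1, 0000:af:00.1) stays in the root namespace as the initiator, so the two pings actually cross the link. A minimal sketch of that setup, assuming the cvl_0_0/cvl_0_1 names from this run:

# Sketch of the namespace topology built by nvmftestinit above; assumes
# the two E810 ports enumerate as cvl_0_0 (target) and cvl_0_1 (initiator).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator

The target application is then launched inside that namespace, which is why the nvmf_tgt command above is prefixed with ip netns exec cvl_0_0_ns_spdk.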
There is no support for it in SPDK. Enabled only for validation. 00:08:03.099 [2024-07-25 13:35:59.796705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.099 [2024-07-25 13:35:59.835155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.099 [2024-07-25 13:35:59.835198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.099 [2024-07-25 13:35:59.835207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.099 [2024-07-25 13:35:59.835215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.099 [2024-07-25 13:35:59.835222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.099 [2024-07-25 13:35:59.835268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.099 [2024-07-25 13:35:59.835367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.099 [2024-07-25 13:35:59.835369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.671 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.671 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:03.671 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.671 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.671 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.929 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.929 [2024-07-25 13:36:00.727213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.929 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:04.188 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:04.188 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:04.447 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:04.447 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:04.447 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:04.706 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=950a2dca-e761-4408-9b83-68a1d8a7cf7a 00:08:04.706 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 950a2dca-e761-4408-9b83-68a1d8a7cf7a lvol 20 00:08:04.965 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@32 -- # lvol=3be2255c-21ee-4bca-bc85-6151fa791872 00:08:04.965 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.223 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3be2255c-21ee-4bca-bc85-6151fa791872 00:08:05.223 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.483 [2024-07-25 13:36:02.227764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.483 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.742 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=120555 00:08:05.742 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.742 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:05.742 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.679 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3be2255c-21ee-4bca-bc85-6151fa791872 MY_SNAPSHOT 00:08:06.938 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=87829e48-1e4f-4da2-b7b4-251cd808672b 00:08:06.938 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3be2255c-21ee-4bca-bc85-6151fa791872 30 00:08:07.198 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 87829e48-1e4f-4da2-b7b4-251cd808672b MY_CLONE 00:08:07.198 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5425b5ab-3e4c-4d04-94ce-e37d14aebef8 00:08:07.198 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5425b5ab-3e4c-4d04-94ce-e37d14aebef8 00:08:07.765 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 120555 00:08:15.886 Initializing NVMe Controllers 00:08:15.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.886 Controller IO queue size 128, less than required. 00:08:15.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:15.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:15.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:15.886 Initialization complete. Launching workers. 
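Behind the xtrace noise, nvmf_lvol.sh drives a compact RPC sequence: stripe two malloc bdevs into a raid0, put an lvstore on it, carve out a 20 MiB lvol, export it over NVMe/TCP, then snapshot, resize, clone, and inflate the lvol while spdk_nvme_perf runs random writes against it. A condensed sketch, with $rpc_py and the relative paths standing in for the workspace paths logged above:

# Condensed sketch of the nvmf_lvol flow above; paths are placeholders.
rpc_py=./scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
base_bdevs="$($rpc_py bdev_malloc_create 64 512) $($rpc_py bdev_malloc_create 64 512)"
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "$base_bdevs"   # stripe Malloc0+Malloc1
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)               # lvstore on the raid
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)              # 20 MiB logical volume
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &          # load while we mutate the lvol
perf_pid=$!
snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30                             # grow lvol 20 -> 30 MiB
clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"                              # decouple clone from snapshot
wait "$perf_pid"

The latency table that follows is the output of that perf run.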
00:08:15.886 ======================================================== 00:08:15.886 Latency(us) 00:08:15.886 Device Information : IOPS MiB/s Average min max 00:08:15.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12650.00 49.41 10120.08 1797.39 57094.30 00:08:15.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12435.10 48.57 10296.46 3626.65 54260.01 00:08:15.886 ======================================================== 00:08:15.886 Total : 25085.10 97.99 10207.52 1797.39 57094.30 00:08:15.886 00:08:15.886 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.145 13:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3be2255c-21ee-4bca-bc85-6151fa791872 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 950a2dca-e761-4408-9b83-68a1d8a7cf7a 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.404 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.404 rmmod nvme_tcp 00:08:16.662 rmmod nvme_fabrics 00:08:16.662 rmmod nvme_keyring 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 119932 ']' 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 119932 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 119932 ']' 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 119932 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119932 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.662 13:36:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119932' 00:08:16.662 killing process with pid 119932 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 119932 00:08:16.662 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 119932 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.921 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.826 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.826 00:08:18.826 real 0m22.756s 00:08:18.826 user 1m2.034s 00:08:18.826 sys 0m9.688s 00:08:18.826 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.826 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.826 ************************************ 00:08:18.826 END TEST nvmf_lvol 00:08:18.826 ************************************ 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.085 ************************************ 00:08:19.085 START TEST nvmf_lvs_grow 00:08:19.085 ************************************ 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.085 * Looking for test storage... 
00:08:19.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.085 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.086 13:36:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:19.086 13:36:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.086 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:25.658 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:25.659 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:25.659 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.659 
13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:25.659 Found net devices under 0000:af:00.0: cvl_0_0 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:25.659 Found net devices under 0000:af:00.1: cvl_0_1 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.659 13:36:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.659 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:08:25.918 00:08:25.918 --- 10.0.0.2 ping statistics --- 00:08:25.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.918 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:25.918 00:08:25.918 --- 10.0.0.1 ping statistics --- 00:08:25.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.918 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:25.918 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=126508 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 126508 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 126508 ']' 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.919 13:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.919 [2024-07-25 13:36:22.678287] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
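nvmfappstart above backgrounds nvmf_tgt inside the namespace, records its pid in nvmfpid, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. One way to express that wait — a sketch only, not necessarily waitforlisten's exact implementation — is to poll a cheap RPC until it succeeds:

# Hedged sketch of waiting for an SPDK app's RPC socket; rpc_get_methods
# is a lightweight RPC that succeeds once the app is serving requests.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local rpc_py=./scripts/rpc.py                 # placeholder path
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1    # app exited early
        if "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0                              # socket is up and answering
        fi
        sleep 0.1
    done
    return 1
}
# e.g. wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock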
00:08:25.919 [2024-07-25 13:36:22.678342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.919 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.919 [2024-07-25 13:36:22.718752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:25.919 [2024-07-25 13:36:22.752854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.919 [2024-07-25 13:36:22.791090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.919 [2024-07-25 13:36:22.791131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.919 [2024-07-25 13:36:22.791140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.919 [2024-07-25 13:36:22.791149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.919 [2024-07-25 13:36:22.791156] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.919 [2024-07-25 13:36:22.791177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.855 [2024-07-25 13:36:23.673785] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.855 ************************************ 00:08:26.855 START TEST lvs_grow_clean 00:08:26.855 ************************************ 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.855 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.114 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.114 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:27.114 13:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.373 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=be06eb21-0b64-4b3f-9374-484e5413c496 00:08:27.373 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:27.373 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.631 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.631 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.631 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u be06eb21-0b64-4b3f-9374-484e5413c496 lvol 150 00:08:27.631 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ec609f7f-eb69-4656-9626-ed27c11cf83b 00:08:27.631 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.631 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.889 [2024-07-25 13:36:24.625413] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.889 [2024-07-25 13:36:24.625463] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 
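lvs_grow_clean exercises online growth of an lvstore: a 200 MiB file-backed AIO bdev is created with a 4 MiB cluster size, which yields 49 data clusters out of 50 (presumably one cluster is consumed by lvstore metadata); the backing file is then truncated to 400 MiB and bdev_aio_rescan tells the bdev layer about the new size — the rescan notice above shows the block count doubling from 51200 to 102400 (4 KiB blocks). A sketch of that setup, with $rpc_py and the aio file path standing in for the workspace paths logged above:

# Sketch of the lvs_grow backing-store setup above; paths are placeholders.
rpc_py=./scripts/rpc.py
aio=./test/nvmf/target/aio_bdev                       # file backing the AIO bdev
rm -f "$aio"
truncate -s 200M "$aio"
$rpc_py bdev_aio_create "$aio" aio_bdev 4096          # 4 KiB logical block size
lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol on the 196 MiB store
truncate -s 400M "$aio"                               # grow the backing file
$rpc_py bdev_aio_rescan aio_bdev                      # bdev picks up the new block count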
00:08:27.889 true 00:08:27.889 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:27.889 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:28.147 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:28.147 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.147 13:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ec609f7f-eb69-4656-9626-ed27c11cf83b 00:08:28.406 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.406 [2024-07-25 13:36:25.267353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.406 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=127079 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 127079 /var/tmp/bdevperf.sock 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 127079 ']' 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.664 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.665 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.665 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.665 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.665 [2024-07-25 13:36:25.495023] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:28.665 [2024-07-25 13:36:25.495074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127079 ] 00:08:28.665 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.665 [2024-07-25 13:36:25.531917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:28.923 [2024-07-25 13:36:25.565289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.923 [2024-07-25 13:36:25.602988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.923 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.923 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:28.923 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.182 Nvme0n1 00:08:29.182 13:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.440 [ 00:08:29.440 { 00:08:29.440 "name": "Nvme0n1", 00:08:29.440 "aliases": [ 00:08:29.441 "ec609f7f-eb69-4656-9626-ed27c11cf83b" 00:08:29.441 ], 00:08:29.441 "product_name": "NVMe disk", 00:08:29.441 "block_size": 4096, 00:08:29.441 "num_blocks": 38912, 00:08:29.441 "uuid": "ec609f7f-eb69-4656-9626-ed27c11cf83b", 00:08:29.441 "assigned_rate_limits": { 00:08:29.441 "rw_ios_per_sec": 0, 00:08:29.441 "rw_mbytes_per_sec": 0, 00:08:29.441 "r_mbytes_per_sec": 0, 00:08:29.441 "w_mbytes_per_sec": 0 00:08:29.441 }, 00:08:29.441 "claimed": false, 00:08:29.441 "zoned": false, 00:08:29.441 "supported_io_types": { 00:08:29.441 "read": true, 00:08:29.441 "write": true, 00:08:29.441 "unmap": true, 00:08:29.441 "flush": true, 00:08:29.441 "reset": true, 00:08:29.441 "nvme_admin": true, 00:08:29.441 "nvme_io": true, 00:08:29.441 "nvme_io_md": false, 00:08:29.441 "write_zeroes": true, 00:08:29.441 "zcopy": false, 00:08:29.441 "get_zone_info": false, 00:08:29.441 "zone_management": false, 00:08:29.441 "zone_append": false, 00:08:29.441 "compare": true, 00:08:29.441 "compare_and_write": true, 00:08:29.441 "abort": true, 00:08:29.441 "seek_hole": false, 00:08:29.441 "seek_data": false, 00:08:29.441 "copy": true, 00:08:29.441 "nvme_iov_md": false 00:08:29.441 }, 00:08:29.441 "memory_domains": [ 00:08:29.441 { 00:08:29.441 "dma_device_id": "system", 00:08:29.441 "dma_device_type": 1 00:08:29.441 } 00:08:29.441 ], 00:08:29.441 "driver_specific": { 00:08:29.441 "nvme": [ 00:08:29.441 { 00:08:29.441 "trid": { 00:08:29.441 "trtype": "TCP", 00:08:29.441 "adrfam": "IPv4", 00:08:29.441 "traddr": "10.0.0.2", 00:08:29.441 "trsvcid": "4420", 00:08:29.441 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.441 }, 00:08:29.441 "ctrlr_data": { 00:08:29.441 "cntlid": 1, 00:08:29.441 "vendor_id": "0x8086", 00:08:29.441 "model_number": "SPDK bdev Controller", 00:08:29.441 "serial_number": "SPDK0", 00:08:29.441 "firmware_revision": "24.09", 00:08:29.441 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:08:29.441 "oacs": { 00:08:29.441 "security": 0, 00:08:29.441 "format": 0, 00:08:29.441 "firmware": 0, 00:08:29.441 "ns_manage": 0 00:08:29.441 }, 00:08:29.441 "multi_ctrlr": true, 00:08:29.441 "ana_reporting": false 00:08:29.441 }, 00:08:29.441 "vs": { 00:08:29.441 "nvme_version": "1.3" 00:08:29.441 }, 00:08:29.441 "ns_data": { 00:08:29.441 "id": 1, 00:08:29.441 "can_share": true 00:08:29.441 } 00:08:29.441 } 00:08:29.441 ], 00:08:29.441 "mp_policy": "active_passive" 00:08:29.441 } 00:08:29.441 } 00:08:29.441 ] 00:08:29.441 13:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=127187 00:08:29.441 13:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.441 13:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.441 Running I/O for 10 seconds... 00:08:30.377 Latency(us) 00:08:30.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.377 Nvme0n1 : 1.00 24013.00 93.80 0.00 0.00 0.00 0.00 0.00 00:08:30.377 =================================================================================================================== 00:08:30.377 Total : 24013.00 93.80 0.00 0.00 0.00 0.00 0.00 00:08:30.377 00:08:31.313 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:31.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.572 Nvme0n1 : 2.00 24134.50 94.28 0.00 0.00 0.00 0.00 0.00 00:08:31.572 =================================================================================================================== 00:08:31.572 Total : 24134.50 94.28 0.00 0.00 0.00 0.00 0.00 00:08:31.572 00:08:31.572 true 00:08:31.572 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:31.573 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.831 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.831 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.831 13:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 127187 00:08:32.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.399 Nvme0n1 : 3.00 24172.33 94.42 0.00 0.00 0.00 0.00 0.00 00:08:32.399 =================================================================================================================== 00:08:32.399 Total : 24172.33 94.42 0.00 0.00 0.00 0.00 0.00 00:08:32.399 00:08:33.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.776 Nvme0n1 : 4.00 24193.50 94.51 0.00 0.00 0.00 0.00 0.00 00:08:33.776 
=================================================================================================================== 00:08:33.776 Total : 24193.50 94.51 0.00 0.00 0.00 0.00 0.00 00:08:33.776 00:08:34.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.712 Nvme0n1 : 5.00 24222.20 94.62 0.00 0.00 0.00 0.00 0.00 00:08:34.712 =================================================================================================================== 00:08:34.712 Total : 24222.20 94.62 0.00 0.00 0.00 0.00 0.00 00:08:34.712 00:08:35.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.647 Nvme0n1 : 6.00 24267.50 94.79 0.00 0.00 0.00 0.00 0.00 00:08:35.647 =================================================================================================================== 00:08:35.647 Total : 24267.50 94.79 0.00 0.00 0.00 0.00 0.00 00:08:35.647 00:08:36.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.584 Nvme0n1 : 7.00 24295.71 94.91 0.00 0.00 0.00 0.00 0.00 00:08:36.584 =================================================================================================================== 00:08:36.584 Total : 24295.71 94.91 0.00 0.00 0.00 0.00 0.00 00:08:36.584 00:08:37.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.521 Nvme0n1 : 8.00 24320.75 95.00 0.00 0.00 0.00 0.00 0.00 00:08:37.521 =================================================================================================================== 00:08:37.521 Total : 24320.75 95.00 0.00 0.00 0.00 0.00 0.00 00:08:37.521 00:08:38.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.455 Nvme0n1 : 9.00 24349.00 95.11 0.00 0.00 0.00 0.00 0.00 00:08:38.455 =================================================================================================================== 00:08:38.455 Total : 24349.00 95.11 0.00 0.00 0.00 0.00 0.00 00:08:38.455 00:08:39.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.389 Nvme0n1 : 10.00 24365.30 95.18 0.00 0.00 0.00 0.00 0.00 00:08:39.389 =================================================================================================================== 00:08:39.389 Total : 24365.30 95.18 0.00 0.00 0.00 0.00 0.00 00:08:39.389 00:08:39.389 00:08:39.389 Latency(us) 00:08:39.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.389 Nvme0n1 : 10.00 24367.11 95.18 0.00 0.00 5249.39 3145.73 10905.19 00:08:39.389 =================================================================================================================== 00:08:39.389 Total : 24367.11 95.18 0.00 0.00 5249.39 3145.73 10905.19 00:08:39.389 0 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 127079 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 127079 ']' 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 127079 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.648 13:36:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127079 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127079' 00:08:39.648 killing process with pid 127079 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 127079 00:08:39.648 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.648 00:08:39.648 Latency(us) 00:08:39.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.648 =================================================================================================================== 00:08:39.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 127079 00:08:39.648 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.909 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.226 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:40.227 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.227 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.227 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:40.227 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.486 [2024-07-25 13:36:37.204891] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.486 13:36:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.486 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:40.746 request: 00:08:40.746 { 00:08:40.746 "uuid": "be06eb21-0b64-4b3f-9374-484e5413c496", 00:08:40.746 "method": "bdev_lvol_get_lvstores", 00:08:40.746 "req_id": 1 00:08:40.746 } 00:08:40.746 Got JSON-RPC error response 00:08:40.746 response: 00:08:40.746 { 00:08:40.746 "code": -19, 00:08:40.746 "message": "No such device" 00:08:40.746 } 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.746 aio_bdev 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ec609f7f-eb69-4656-9626-ed27c11cf83b 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ec609f7f-eb69-4656-9626-ed27c11cf83b 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.746 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.004 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ec609f7f-eb69-4656-9626-ed27c11cf83b -t 2000 00:08:41.264 [ 00:08:41.264 { 00:08:41.264 "name": "ec609f7f-eb69-4656-9626-ed27c11cf83b", 00:08:41.264 "aliases": [ 00:08:41.264 "lvs/lvol" 00:08:41.264 ], 00:08:41.264 "product_name": "Logical Volume", 00:08:41.264 "block_size": 4096, 00:08:41.264 "num_blocks": 38912, 00:08:41.264 "uuid": "ec609f7f-eb69-4656-9626-ed27c11cf83b", 00:08:41.264 "assigned_rate_limits": { 00:08:41.264 "rw_ios_per_sec": 0, 00:08:41.264 "rw_mbytes_per_sec": 0, 00:08:41.264 "r_mbytes_per_sec": 0, 00:08:41.264 "w_mbytes_per_sec": 0 00:08:41.264 }, 00:08:41.264 "claimed": false, 00:08:41.264 "zoned": false, 00:08:41.264 "supported_io_types": { 00:08:41.264 "read": true, 00:08:41.264 "write": true, 00:08:41.264 "unmap": true, 00:08:41.264 "flush": false, 00:08:41.264 "reset": true, 00:08:41.264 "nvme_admin": false, 00:08:41.264 "nvme_io": false, 00:08:41.264 "nvme_io_md": false, 00:08:41.264 "write_zeroes": true, 00:08:41.264 "zcopy": false, 00:08:41.264 "get_zone_info": false, 00:08:41.264 "zone_management": false, 00:08:41.264 "zone_append": false, 00:08:41.264 "compare": false, 00:08:41.264 "compare_and_write": false, 00:08:41.264 "abort": false, 00:08:41.264 "seek_hole": true, 00:08:41.264 "seek_data": true, 00:08:41.264 "copy": false, 00:08:41.264 "nvme_iov_md": false 00:08:41.264 }, 00:08:41.264 "driver_specific": { 00:08:41.264 "lvol": { 00:08:41.264 "lvol_store_uuid": "be06eb21-0b64-4b3f-9374-484e5413c496", 00:08:41.264 "base_bdev": "aio_bdev", 00:08:41.264 "thin_provision": false, 00:08:41.264 "num_allocated_clusters": 38, 00:08:41.264 "snapshot": false, 00:08:41.264 "clone": false, 00:08:41.264 "esnap_clone": false 00:08:41.264 } 00:08:41.264 } 00:08:41.264 } 00:08:41.264 ] 00:08:41.264 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:41.264 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:41.264 13:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.264 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.264 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:41.264 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.523 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.523 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ec609f7f-eb69-4656-9626-ed27c11cf83b 00:08:41.782 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u be06eb21-0b64-4b3f-9374-484e5413c496 00:08:41.782 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.041 00:08:42.041 real 0m15.084s 00:08:42.041 user 0m14.120s 00:08:42.041 sys 0m1.964s 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.041 ************************************ 00:08:42.041 END TEST lvs_grow_clean 00:08:42.041 ************************************ 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.041 ************************************ 00:08:42.041 START TEST lvs_grow_dirty 00:08:42.041 ************************************ 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.041 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.300 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.300 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.559 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:42.559 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:42.559 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:42.559 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:42.559 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:42.559 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 lvol 150 00:08:42.818 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:42.818 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.818 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:43.077 [2024-07-25 13:36:39.770308] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:43.077 [2024-07-25 13:36:39.770356] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:43.077 true 00:08:43.077 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:43.077 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.077 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:43.077 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.335 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:43.594 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.594 [2024-07-25 13:36:40.448321] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:43.594 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=129784 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 129784 /var/tmp/bdevperf.sock 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 129784 ']' 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.853 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.853 [2024-07-25 13:36:40.686806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:43.853 [2024-07-25 13:36:40.686859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129784 ] 00:08:43.853 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.853 [2024-07-25 13:36:40.722358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
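As in the clean pass, the dirty bdevperf instance only sees the remote namespace once a controller is attached through its private RPC socket, and the 10-second randwrite run is started out of band, with the lvstore grown roughly two seconds in. Condensed, with $lvs illustrative:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 2
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # grow while writes are in flight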
00:08:44.112 [2024-07-25 13:36:40.756856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.112 [2024-07-25 13:36:40.794655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.112 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.112 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:44.112 13:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:44.371 Nvme0n1 00:08:44.371 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:44.630 [ 00:08:44.630 { 00:08:44.630 "name": "Nvme0n1", 00:08:44.630 "aliases": [ 00:08:44.630 "28e72e0b-40a8-488d-b849-4143c0582f8a" 00:08:44.630 ], 00:08:44.630 "product_name": "NVMe disk", 00:08:44.630 "block_size": 4096, 00:08:44.630 "num_blocks": 38912, 00:08:44.630 "uuid": "28e72e0b-40a8-488d-b849-4143c0582f8a", 00:08:44.630 "assigned_rate_limits": { 00:08:44.630 "rw_ios_per_sec": 0, 00:08:44.630 "rw_mbytes_per_sec": 0, 00:08:44.630 "r_mbytes_per_sec": 0, 00:08:44.630 "w_mbytes_per_sec": 0 00:08:44.630 }, 00:08:44.630 "claimed": false, 00:08:44.630 "zoned": false, 00:08:44.630 "supported_io_types": { 00:08:44.630 "read": true, 00:08:44.630 "write": true, 00:08:44.630 "unmap": true, 00:08:44.630 "flush": true, 00:08:44.630 "reset": true, 00:08:44.630 "nvme_admin": true, 00:08:44.630 "nvme_io": true, 00:08:44.630 "nvme_io_md": false, 00:08:44.630 "write_zeroes": true, 00:08:44.630 "zcopy": false, 00:08:44.630 "get_zone_info": false, 00:08:44.630 "zone_management": false, 00:08:44.630 "zone_append": false, 00:08:44.630 "compare": true, 00:08:44.630 "compare_and_write": true, 00:08:44.630 "abort": true, 00:08:44.630 "seek_hole": false, 00:08:44.630 "seek_data": false, 00:08:44.630 "copy": true, 00:08:44.630 "nvme_iov_md": false 00:08:44.630 }, 00:08:44.630 "memory_domains": [ 00:08:44.630 { 00:08:44.630 "dma_device_id": "system", 00:08:44.630 "dma_device_type": 1 00:08:44.630 } 00:08:44.630 ], 00:08:44.630 "driver_specific": { 00:08:44.630 "nvme": [ 00:08:44.630 { 00:08:44.630 "trid": { 00:08:44.630 "trtype": "TCP", 00:08:44.630 "adrfam": "IPv4", 00:08:44.630 "traddr": "10.0.0.2", 00:08:44.630 "trsvcid": "4420", 00:08:44.630 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:44.630 }, 00:08:44.630 "ctrlr_data": { 00:08:44.630 "cntlid": 1, 00:08:44.630 "vendor_id": "0x8086", 00:08:44.630 "model_number": "SPDK bdev Controller", 00:08:44.630 "serial_number": "SPDK0", 00:08:44.630 "firmware_revision": "24.09", 00:08:44.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:44.630 "oacs": { 00:08:44.630 "security": 0, 00:08:44.630 "format": 0, 00:08:44.630 "firmware": 0, 00:08:44.630 "ns_manage": 0 00:08:44.630 }, 00:08:44.630 "multi_ctrlr": true, 00:08:44.630 "ana_reporting": false 00:08:44.630 }, 00:08:44.630 "vs": { 00:08:44.630 "nvme_version": "1.3" 00:08:44.630 }, 00:08:44.630 "ns_data": { 00:08:44.630 "id": 1, 00:08:44.630 "can_share": true 00:08:44.630 } 00:08:44.630 } 00:08:44.630 ], 00:08:44.630 "mp_policy": "active_passive" 00:08:44.630 } 00:08:44.630 } 00:08:44.630 ] 00:08:44.630 
13:36:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=129798 00:08:44.630 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:44.630 13:36:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:44.630 Running I/O for 10 seconds... 00:08:45.567 Latency(us) 00:08:45.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.567 Nvme0n1 : 1.00 22978.00 89.76 0.00 0.00 0.00 0.00 0.00 00:08:45.567 =================================================================================================================== 00:08:45.567 Total : 22978.00 89.76 0.00 0.00 0.00 0.00 0.00 00:08:45.567 00:08:46.501 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:46.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.759 Nvme0n1 : 2.00 23177.00 90.54 0.00 0.00 0.00 0.00 0.00 00:08:46.759 =================================================================================================================== 00:08:46.759 Total : 23177.00 90.54 0.00 0.00 0.00 0.00 0.00 00:08:46.759 00:08:46.759 true 00:08:46.759 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:46.759 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:47.017 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:47.017 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:47.017 13:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 129798 00:08:47.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.586 Nvme0n1 : 3.00 23275.33 90.92 0.00 0.00 0.00 0.00 0.00 00:08:47.586 =================================================================================================================== 00:08:47.586 Total : 23275.33 90.92 0.00 0.00 0.00 0.00 0.00 00:08:47.586 00:08:48.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.963 Nvme0n1 : 4.00 23352.50 91.22 0.00 0.00 0.00 0.00 0.00 00:08:48.963 =================================================================================================================== 00:08:48.963 Total : 23352.50 91.22 0.00 0.00 0.00 0.00 0.00 00:08:48.963 00:08:49.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.899 Nvme0n1 : 5.00 23408.40 91.44 0.00 0.00 0.00 0.00 0.00 00:08:49.899 =================================================================================================================== 00:08:49.899 Total : 23408.40 91.44 0.00 0.00 0.00 0.00 0.00 00:08:49.899 00:08:50.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:50.835 Nvme0n1 : 6.00 23457.67 91.63 0.00 0.00 0.00 0.00 0.00 00:08:50.835 =================================================================================================================== 00:08:50.835 Total : 23457.67 91.63 0.00 0.00 0.00 0.00 0.00 00:08:50.835 00:08:51.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.770 Nvme0n1 : 7.00 23492.86 91.77 0.00 0.00 0.00 0.00 0.00 00:08:51.770 =================================================================================================================== 00:08:51.770 Total : 23492.86 91.77 0.00 0.00 0.00 0.00 0.00 00:08:51.770 00:08:52.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.707 Nvme0n1 : 8.00 23519.25 91.87 0.00 0.00 0.00 0.00 0.00 00:08:52.707 =================================================================================================================== 00:08:52.707 Total : 23519.25 91.87 0.00 0.00 0.00 0.00 0.00 00:08:52.707 00:08:53.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.643 Nvme0n1 : 9.00 23538.89 91.95 0.00 0.00 0.00 0.00 0.00 00:08:53.643 =================================================================================================================== 00:08:53.643 Total : 23538.89 91.95 0.00 0.00 0.00 0.00 0.00 00:08:53.643 00:08:54.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.615 Nvme0n1 : 10.00 23550.60 91.99 0.00 0.00 0.00 0.00 0.00 00:08:54.615 =================================================================================================================== 00:08:54.615 Total : 23550.60 91.99 0.00 0.00 0.00 0.00 0.00 00:08:54.615 00:08:54.615 00:08:54.615 Latency(us) 00:08:54.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.615 Nvme0n1 : 10.01 23550.33 91.99 0.00 0.00 5431.31 4194.30 16777.22 00:08:54.615 =================================================================================================================== 00:08:54.615 Total : 23550.33 91.99 0.00 0.00 5431.31 4194.30 16777.22 00:08:54.615 0 00:08:54.615 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 129784 00:08:54.615 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 129784 ']' 00:08:54.615 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 129784 00:08:54.615 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:54.615 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.615 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129784 00:08:54.874 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:54.874 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:54.874 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129784' 00:08:54.874 killing process with pid 129784 00:08:54.874 13:36:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 129784 00:08:54.874 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.874 00:08:54.874 Latency(us) 00:08:54.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.874 =================================================================================================================== 00:08:54.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.874 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 129784 00:08:54.874 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.133 13:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 126508 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 126508 00:08:55.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 126508 Killed "${NVMF_APP[@]}" "$@" 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=131656 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 131656 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 131656 ']' 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.392 13:36:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.392 13:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.652 [2024-07-25 13:36:52.313525] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:55.652 [2024-07-25 13:36:52.313581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.652 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.652 [2024-07-25 13:36:52.354464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:55.652 [2024-07-25 13:36:52.390118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.652 [2024-07-25 13:36:52.428946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.652 [2024-07-25 13:36:52.428987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.652 [2024-07-25 13:36:52.428997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.652 [2024-07-25 13:36:52.429005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.652 [2024-07-25 13:36:52.429012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
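This is the point of the dirty variant: the previous nvmf_tgt (pid 126508) was taken down with kill -9 while the grown lvstore was still live, so nothing was cleanly unloaded. When the fresh target re-registers the AIO bdev, the blobstore load path has to replay its metadata, which is the "Performing recovery on blobstore" notice just below. A condensed sketch of the restart, with $nvmfpid illustrative and the ip netns prefix from the log omitted:

  kill -9 "$nvmfpid"                                 # no clean shutdown: lvstore left dirty
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &         # fresh target process
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096   # blobstore recovery runs during load
  rpc.py bdev_wait_for_examine                       # block until lvs/lvol reappear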
00:08:55.652 [2024-07-25 13:36:52.429037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.220 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.220 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:56.220 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.479 [2024-07-25 13:36:53.309398] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:56.479 [2024-07-25 13:36:53.309478] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:56.479 [2024-07-25 13:36:53.309503] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.479 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.737 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 28e72e0b-40a8-488d-b849-4143c0582f8a -t 2000 00:08:56.996 [ 00:08:56.996 { 00:08:56.996 "name": "28e72e0b-40a8-488d-b849-4143c0582f8a", 00:08:56.996 "aliases": [ 00:08:56.996 "lvs/lvol" 00:08:56.996 ], 00:08:56.996 "product_name": "Logical Volume", 00:08:56.996 "block_size": 4096, 00:08:56.996 "num_blocks": 38912, 00:08:56.996 "uuid": "28e72e0b-40a8-488d-b849-4143c0582f8a", 00:08:56.996 "assigned_rate_limits": { 00:08:56.996 "rw_ios_per_sec": 0, 00:08:56.996 "rw_mbytes_per_sec": 0, 00:08:56.996 "r_mbytes_per_sec": 0, 00:08:56.996 "w_mbytes_per_sec": 0 00:08:56.996 }, 00:08:56.996 "claimed": false, 00:08:56.996 "zoned": false, 
00:08:56.996 "supported_io_types": { 00:08:56.996 "read": true, 00:08:56.996 "write": true, 00:08:56.996 "unmap": true, 00:08:56.996 "flush": false, 00:08:56.996 "reset": true, 00:08:56.996 "nvme_admin": false, 00:08:56.996 "nvme_io": false, 00:08:56.996 "nvme_io_md": false, 00:08:56.996 "write_zeroes": true, 00:08:56.996 "zcopy": false, 00:08:56.996 "get_zone_info": false, 00:08:56.996 "zone_management": false, 00:08:56.996 "zone_append": false, 00:08:56.996 "compare": false, 00:08:56.996 "compare_and_write": false, 00:08:56.996 "abort": false, 00:08:56.996 "seek_hole": true, 00:08:56.996 "seek_data": true, 00:08:56.996 "copy": false, 00:08:56.997 "nvme_iov_md": false 00:08:56.997 }, 00:08:56.997 "driver_specific": { 00:08:56.997 "lvol": { 00:08:56.997 "lvol_store_uuid": "34cc508b-87a5-4653-8822-a6fe3b9db1e4", 00:08:56.997 "base_bdev": "aio_bdev", 00:08:56.997 "thin_provision": false, 00:08:56.997 "num_allocated_clusters": 38, 00:08:56.997 "snapshot": false, 00:08:56.997 "clone": false, 00:08:56.997 "esnap_clone": false 00:08:56.997 } 00:08:56.997 } 00:08:56.997 } 00:08:56.997 ] 00:08:56.997 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:56.997 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:56.997 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:56.997 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:56.997 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:56.997 13:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:57.255 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:57.255 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.515 [2024-07-25 13:36:54.153958] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:57.515 request: 00:08:57.515 { 00:08:57.515 "uuid": "34cc508b-87a5-4653-8822-a6fe3b9db1e4", 00:08:57.515 "method": "bdev_lvol_get_lvstores", 00:08:57.515 "req_id": 1 00:08:57.515 } 00:08:57.515 Got JSON-RPC error response 00:08:57.515 response: 00:08:57.515 { 00:08:57.515 "code": -19, 00:08:57.515 "message": "No such device" 00:08:57.515 } 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.515 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.774 aio_bdev 00:08:57.774 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:57.774 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:57.774 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.774 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:57.775 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.775 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.775 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.034 13:36:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 28e72e0b-40a8-488d-b849-4143c0582f8a -t 2000 00:08:58.034 [ 00:08:58.034 { 00:08:58.034 "name": "28e72e0b-40a8-488d-b849-4143c0582f8a", 00:08:58.034 "aliases": [ 00:08:58.034 "lvs/lvol" 00:08:58.034 ], 00:08:58.034 "product_name": "Logical Volume", 00:08:58.034 "block_size": 4096, 00:08:58.034 "num_blocks": 38912, 00:08:58.034 "uuid": "28e72e0b-40a8-488d-b849-4143c0582f8a", 00:08:58.034 "assigned_rate_limits": { 00:08:58.034 "rw_ios_per_sec": 0, 00:08:58.034 "rw_mbytes_per_sec": 0, 00:08:58.034 "r_mbytes_per_sec": 0, 00:08:58.034 "w_mbytes_per_sec": 0 00:08:58.034 }, 00:08:58.034 "claimed": false, 00:08:58.034 "zoned": false, 00:08:58.034 "supported_io_types": { 00:08:58.034 "read": true, 00:08:58.034 "write": true, 00:08:58.034 "unmap": true, 00:08:58.034 "flush": false, 00:08:58.034 "reset": true, 00:08:58.034 "nvme_admin": false, 00:08:58.034 "nvme_io": false, 00:08:58.034 "nvme_io_md": false, 00:08:58.034 "write_zeroes": true, 00:08:58.034 "zcopy": false, 00:08:58.034 "get_zone_info": false, 00:08:58.034 "zone_management": false, 00:08:58.034 "zone_append": false, 00:08:58.034 "compare": false, 00:08:58.034 "compare_and_write": false, 00:08:58.034 "abort": false, 00:08:58.034 "seek_hole": true, 00:08:58.034 "seek_data": true, 00:08:58.034 "copy": false, 00:08:58.034 "nvme_iov_md": false 00:08:58.034 }, 00:08:58.034 "driver_specific": { 00:08:58.034 "lvol": { 00:08:58.034 "lvol_store_uuid": "34cc508b-87a5-4653-8822-a6fe3b9db1e4", 00:08:58.034 "base_bdev": "aio_bdev", 00:08:58.034 "thin_provision": false, 00:08:58.034 "num_allocated_clusters": 38, 00:08:58.034 "snapshot": false, 00:08:58.034 "clone": false, 00:08:58.034 "esnap_clone": false 00:08:58.034 } 00:08:58.034 } 00:08:58.034 } 00:08:58.034 ] 00:08:58.034 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:58.034 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:58.034 13:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:58.294 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:58.294 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 00:08:58.294 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:58.553 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:58.553 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 28e72e0b-40a8-488d-b849-4143c0582f8a 00:08:58.553 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34cc508b-87a5-4653-8822-a6fe3b9db1e4 
00:08:58.812 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:08:59.070 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:59.071
00:08:59.071 real 0m16.879s
00:08:59.071 user 0m41.662s
00:08:59.071 sys 0m5.001s
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:08:59.071 ************************************
00:08:59.071 END TEST lvs_grow_dirty
00:08:59.071 ************************************
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:08:59.071 nvmf_trace.0
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:59.071 rmmod nvme_tcp
00:08:59.071 rmmod nvme_fabrics
00:08:59.071 rmmod nvme_keyring
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 131656 ']'
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 131656
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 131656 ']'
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 131656
00:08:59.071 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname
00:08:59.329 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:59.329 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131656
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131656'
00:08:59.329 killing process with pid 131656
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 131656
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 131656
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:59.329 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:59.330 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:59.330 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:59.330 13:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:01.866
00:09:01.866 real 0m42.481s
00:09:01.866 user 1m1.796s
00:09:01.866 sys 0m12.599s
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:01.866 ************************************
00:09:01.866 END TEST nvmf_lvs_grow
00:09:01.866 ************************************
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:01.866 ************************************
00:09:01.866 START TEST nvmf_bdev_io_wait
00:09:01.866 ************************************
00:09:01.866 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:09:01.867 * Looking for test storage...
00:09:01.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable
00:09:01.867 13:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=()
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:08.440 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:09:08.441 Found 0000:af:00.0 (0x8086 - 0x159b)
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:09:08.441 Found 0000:af:00.1 (0x8086 - 0x159b)
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:09:08.441 Found net devices under 0000:af:00.0: cvl_0_0
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:09:08.441 Found net devices under 0000:af:00.1: cvl_0_1
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:08.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:08.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms
00:09:08.441
00:09:08.441 --- 10.0.0.2 ping statistics ---
00:09:08.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.441 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:08.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:08.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:09:08.441
00:09:08.441 --- 10.0.0.1 ping statistics ---
00:09:08.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.441 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:08.441 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=136065
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 136065
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 136065 ']'
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:08.442 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:08.442 [2024-07-25 13:37:04.946430] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:08.442 [2024-07-25 13:37:04.946479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:08.442 EAL: No free 2048 kB hugepages reported on node 1
00:09:08.442 [2024-07-25 13:37:04.986261] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:08.442 [2024-07-25 13:37:05.021291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:08.442 [2024-07-25 13:37:05.062062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:08.442 [2024-07-25 13:37:05.062110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:08.442 [2024-07-25 13:37:05.062119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:08.442 [2024-07-25 13:37:05.062128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:08.442 [2024-07-25 13:37:05.062136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:08.442 [2024-07-25 13:37:05.062185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:08.442 [2024-07-25 13:37:05.062278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:08.442 [2024-07-25 13:37:05.062366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:08.442 [2024-07-25 13:37:05.062367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 [2024-07-25 13:37:05.872621] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 Malloc0
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:09.053 [2024-07-25 13:37:05.934935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:09.053 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=136213
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=136215
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:09.313 {
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme$subsystem",
00:09:09.313 "trtype": "$TEST_TRANSPORT",
00:09:09.313 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "$NVMF_PORT",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:09.313 "hdgst": ${hdgst:-false},
00:09:09.313 "ddgst": ${ddgst:-false}
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }
00:09:09.313 EOF
00:09:09.313 )")
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=136217
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:09.313 {
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme$subsystem",
00:09:09.313 "trtype": "$TEST_TRANSPORT",
00:09:09.313 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "$NVMF_PORT",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:09.313 "hdgst": ${hdgst:-false},
00:09:09.313 "ddgst": ${ddgst:-false}
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }
00:09:09.313 EOF
00:09:09.313 )")
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=136220
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:09.313 {
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme$subsystem",
00:09:09.313 "trtype": "$TEST_TRANSPORT",
00:09:09.313 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "$NVMF_PORT",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:09.313 "hdgst": ${hdgst:-false},
00:09:09.313 "ddgst": ${ddgst:-false}
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }
00:09:09.313 EOF
00:09:09.313 )")
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:09.313 {
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme$subsystem",
00:09:09.313 "trtype": "$TEST_TRANSPORT",
00:09:09.313 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "$NVMF_PORT",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:09.313 "hdgst": ${hdgst:-false},
00:09:09.313 "ddgst": ${ddgst:-false}
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }
00:09:09.313 EOF
00:09:09.313 )")
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 136213
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme1",
00:09:09.313 "trtype": "tcp",
00:09:09.313 "traddr": "10.0.0.2",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "4420",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:09.313 "hdgst": false,
00:09:09.313 "ddgst": false
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }'
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme1",
00:09:09.313 "trtype": "tcp",
00:09:09.313 "traddr": "10.0.0.2",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "4420",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:09.313 "hdgst": false,
00:09:09.313 "ddgst": false
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }'
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme1",
00:09:09.313 "trtype": "tcp",
00:09:09.313 "traddr": "10.0.0.2",
00:09:09.313 "adrfam": "ipv4",
00:09:09.313 "trsvcid": "4420",
00:09:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:09.313 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:09.313 "hdgst": false,
00:09:09.313 "ddgst": false
00:09:09.313 },
00:09:09.313 "method": "bdev_nvme_attach_controller"
00:09:09.313 }'
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:09:09.313 13:37:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:09.313 "params": {
00:09:09.313 "name": "Nvme1",
00:09:09.313 "trtype": "tcp",
00:09:09.313 "traddr": "10.0.0.2",
00:09:09.314 "adrfam": "ipv4",
00:09:09.314 "trsvcid": "4420",
00:09:09.314 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:09.314 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:09.314 "hdgst": false,
00:09:09.314 "ddgst": false
00:09:09.314 },
00:09:09.314 "method": "bdev_nvme_attach_controller"
00:09:09.314 }'
00:09:09.314 [2024-07-25 13:37:05.987961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... [2024-07-25 13:37:05.987962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... [2024-07-25 13:37:05.988017] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] [2024-07-25 13:37:05.988025] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] [2024-07-25 13:37:05.990213] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... [2024-07-25 13:37:05.990257] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] [2024-07-25 13:37:05.993698] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:09.314 [2024-07-25 13:37:05.993766] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:09.314 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.314 [2024-07-25 13:37:06.126950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:09.314 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.314 [2024-07-25 13:37:06.178268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.573 [2024-07-25 13:37:06.204155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:09:09.573 [2024-07-25 13:37:06.218489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:09.573 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.573 [2024-07-25 13:37:06.269935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.573 [2024-07-25 13:37:06.295527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:09:09.573 [2024-07-25 13:37:06.315601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:09.573 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.573 [2024-07-25 13:37:06.363140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.573 [2024-07-25 13:37:06.377309] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:09.573 [2024-07-25 13:37:06.393053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:09:09.573 [2024-07-25 13:37:06.409970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.573 [2024-07-25 13:37:06.435995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:09:09.832 Running I/O for 1 seconds...
00:09:09.832 Running I/O for 1 seconds...
00:09:09.832 Running I/O for 1 seconds...
00:09:10.090 Running I/O for 1 seconds...
00:09:11.025
00:09:11.025 Latency(us)
00:09:11.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.025 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:11.025 Nvme1n1 : 1.01 12464.93 48.69 0.00 0.00 10236.27 5583.67 20132.66
00:09:11.025 ===================================================================================================================
00:09:11.025 Total : 12464.93 48.69 0.00 0.00 10236.27 5583.67 20132.66
00:09:11.025
00:09:11.025 Latency(us)
00:09:11.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.025 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:11.025 Nvme1n1 : 1.01 10220.23 39.92 0.00 0.00 12472.93 7654.60 22020.10
00:09:11.025 ===================================================================================================================
00:09:11.025 Total : 10220.23 39.92 0.00 0.00 12472.93 7654.60 22020.10
00:09:11.025
00:09:11.025 Latency(us)
00:09:11.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.025 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:11.025 Nvme1n1 : 1.00 12252.09 47.86 0.00 0.00 10418.78 4823.45 23697.82
00:09:11.025 ===================================================================================================================
00:09:11.025 Total : 12252.09 47.86 0.00 0.00 10418.78 4823.45 23697.82
00:09:11.025 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 136215
00:09:11.025
00:09:11.025 Latency(us)
00:09:11.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.025 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:11.025 Nvme1n1 : 1.00 257623.08 1006.34 0.00 0.00 495.31 211.35 619.32
00:09:11.025 ===================================================================================================================
00:09:11.025 Total : 257623.08 1006.34 0.00 0.00 495.31 211.35 619.32
00:09:11.025 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 136217
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 136220
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.285 rmmod nvme_tcp 00:09:11.285 rmmod nvme_fabrics 00:09:11.285 rmmod nvme_keyring 00:09:11.285 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 136065 ']' 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 136065 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 136065 ']' 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 136065 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136065 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136065' 00:09:11.285 killing process with pid 136065 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 136065 00:09:11.285 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 136065 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.545 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.489 00:09:13.489 real 0m11.958s 00:09:13.489 user 0m19.119s 00:09:13.489 sys 0m6.981s 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.489 ************************************ 00:09:13.489 END TEST nvmf_bdev_io_wait 
00:09:13.489 ************************************ 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.489 13:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.748 ************************************ 00:09:13.749 START TEST nvmf_queue_depth 00:09:13.749 ************************************ 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.749 * Looking for test storage... 00:09:13.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.749 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@296 -- # e810=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:20.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:20.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:20.370 Found net devices under 0000:af:00.0: cvl_0_0 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:20.370 Found net devices under 0000:af:00.1: cvl_0_1 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.370 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.630 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:20.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:09:20.889 00:09:20.889 --- 10.0.0.2 ping statistics --- 00:09:20.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.889 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:09:20.889 00:09:20.889 --- 10.0.0.1 ping statistics --- 00:09:20.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.889 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=140438 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 140438 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 140438 ']' 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
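The ping exchange above closes out nvmf_tcp_init, and nvmfappstart then boots the target inside the namespace. Stripped of xtrace noise, the network split this run uses comes down to the following sketch; interface names, addresses, and the nvmf_tgt flags are taken from the trace itself:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator keeps the peer port
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # The target itself then runs namespaced, exactly as traced here; common.sh
  # waits for /var/tmp/spdk.sock to appear before issuing any RPCs.
  ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &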
00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.889 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.889 [2024-07-25 13:37:17.631028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:20.889 [2024-07-25 13:37:17.631086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.889 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.889 [2024-07-25 13:37:17.672216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:20.889 [2024-07-25 13:37:17.707196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.889 [2024-07-25 13:37:17.746382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.889 [2024-07-25 13:37:17.746424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.889 [2024-07-25 13:37:17.746433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.889 [2024-07-25 13:37:17.746442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.889 [2024-07-25 13:37:17.746449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.889 [2024-07-25 13:37:17.746469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 [2024-07-25 13:37:18.480597] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 Malloc0 00:09:21.826 13:37:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 [2024-07-25 13:37:18.552351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=140525 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 140525 /var/tmp/bdevperf.sock 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 140525 ']' 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.826 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.826 [2024-07-25 13:37:18.603455] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:09:21.826 [2024-07-25 13:37:18.603504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140525 ] 00:09:21.826 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.826 [2024-07-25 13:37:18.639991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:21.826 [2024-07-25 13:37:18.674298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.085 [2024-07-25 13:37:18.713820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.085 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.085 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:22.085 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:22.085 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.085 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.344 NVMe0n1 00:09:22.344 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.344 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.344 Running I/O for 10 seconds... 
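Boiled down, the queue-depth scenario traced above is five target RPCs plus a bdevperf session driven over a second RPC socket. Every parameter below is taken from this run; the deep queue (-q 1024, far deeper than a typical transport queue) is what the suite is exercising:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  # Target side: TCP transport with 8 KiB in-capsule data, a 64 MiB / 512 B-block
  # malloc bdev, and a subsystem exposing it on 10.0.0.2:4420.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf idles (-z) on its own socket until a controller is
  # attached, then perform_tests runs 10 s of 4 KiB verify I/O at queue depth 1024.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests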
00:09:32.318
00:09:32.318 Latency(us)
00:09:32.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.318 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:32.318 Verification LBA range: start 0x0 length 0x4000
00:09:32.318 NVMe0n1 : 10.05 13060.69 51.02 0.00 0.00 78158.80 9594.47 54106.52
00:09:32.318 ===================================================================================================================
00:09:32.318 Total : 13060.69 51.02 0.00 0.00 78158.80 9594.47 54106.52
00:09:32.318 0
00:09:32.318 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 140525
00:09:32.318 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 140525 ']'
00:09:32.318 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 140525
00:09:32.318 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:09:32.318 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:32.318 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140525
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140525'
00:09:32.577 killing process with pid 140525
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 140525
00:09:32.577 Received shutdown signal, test time was about 10.000000 seconds
00:09:32.577
00:09:32.577 Latency(us)
00:09:32.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.577 ===================================================================================================================
00:09:32.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 140525
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:32.577 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:32.577 rmmod nvme_tcp
00:09:32.577 rmmod nvme_fabrics
00:09:32.837 rmmod nvme_keyring
00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:09:32.837 13:37:29
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 140438 ']' 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 140438 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 140438 ']' 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 140438 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140438 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140438' 00:09:32.837 killing process with pid 140438 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 140438 00:09:32.837 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 140438 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.095 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.007 13:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.007 00:09:35.007 real 0m21.421s 00:09:35.007 user 0m23.586s 00:09:35.007 sys 0m7.434s 00:09:35.007 13:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.007 13:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.007 ************************************ 00:09:35.007 END TEST nvmf_queue_depth 00:09:35.007 ************************************ 00:09:35.007 13:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:35.007 13:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.007 13:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.007 
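Each suite in this job is dispatched the same way, nvmf_target_multipath being next in line here. Outside Jenkins, the run_test wrapper essentially reduces to invoking the suite script directly; run_test itself adds the timing and the START/END banners seen throughout this log:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/multipath.sh --transport=tcp   # queue_depth.sh, bdev_io_wait.sh, ... behave the same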
13:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.267 ************************************ 00:09:35.267 START TEST nvmf_target_multipath 00:09:35.267 ************************************ 00:09:35.267 13:37:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:35.267 * Looking for test storage... 00:09:35.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.267 13:37:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
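The multipath suite now re-runs nvmftestinit, and its NIC discovery pass fills the trace below. Behind the xtrace it is a sysfs walk against a vendor/device allow-list; a simplified sketch of the idea follows (the real gather_supported_nvmf_pci_devs in nvmf/common.sh builds the e810/x722/mlx arrays from a PCI bus cache and also handles RDMA-only and unbound devices, which this sketch skips):

  intel=0x8086; mellanox=0x15b3
  for pci in /sys/bus/pci/devices/*; do
      id="$(cat "$pci/vendor"):$(cat "$pci/device")"
      case $id in
          "$intel:0x1592"|"$intel:0x159b") family=e810 ;;   # the ID matched twice in this run (ice driver)
          "$intel:0x37d2")                 family=x722 ;;
          "$mellanox":*)                   family=mlx  ;;   # stand-in for the explicit ConnectX ID list
          *) continue ;;
      esac
      # Every accepted device contributes its netdev(s), e.g. cvl_0_0 / cvl_0_1 here.
      echo "$family $pci: $(ls "$pci/net" 2>/dev/null)"
  done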
00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.267 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.881 13:37:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:41.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:41.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.881 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:41.882 Found net devices under 0000:af:00.0: cvl_0_0 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:41.882 Found net devices under 0000:af:00.1: cvl_0_1 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.882 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:42.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:42.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms
00:09:42.141
00:09:42.141 --- 10.0.0.2 ping statistics ---
00:09:42.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:42.141 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:09:42.141 13:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:42.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:42.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms
00:09:42.141
00:09:42.141 --- 10.0.0.1 ping statistics ---
00:09:42.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:42.141 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:42.141 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:09:42.401 only one NIC for nvmf test
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:42.401 rmmod nvme_tcp
00:09:42.401 rmmod nvme_fabrics
00:09:42.401 rmmod nvme_keyring
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:09:42.401
13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.401 13:37:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.307 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1
00:09:44.566
00:09:44.566 real 0m9.327s
00:09:44.566 user 0m1.960s
00:09:44.566 sys 0m5.366s
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:44.566 ************************************
00:09:44.566 END TEST nvmf_target_multipath
00:09:44.566 ************************************
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:44.566 ************************************
00:09:44.566 START TEST nvmf_zcopy
00:09:44.566 ************************************
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:44.566 * Looking for test storage...
00:09:44.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.566 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.567 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.567 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:44.567 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.567 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:44.567 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.825 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.826 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
local -ga x722 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:51.393 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:51.393 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.393 13:37:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:51.393 Found net devices under 0000:af:00.0: cvl_0_0 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:51.393 Found net devices under 0000:af:00.1: cvl_0_1 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.393 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.653 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.653 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.653 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.653 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.653 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.653 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:09:51.912 00:09:51.912 --- 10.0.0.2 ping statistics --- 00:09:51.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.912 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms
00:09:51.912
00:09:51.912 --- 10.0.0.1 ping statistics ---
00:09:51.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:51.912 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=149924
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 149924
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 149924 ']'
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:51.912 13:37:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:51.912 [2024-07-25 13:37:48.658127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
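Both pings succeeding confirms the loopback topology that nvmf_tcp_init built just above: the target port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, and the initiator port cvl_0_1 left in the root namespace at 10.0.0.1. Condensed into the commands one would run by hand, all taken from the trace (only the comments are added); the nvmf_tgt startup banner then continues below:

  ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
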
00:09:51.912 [2024-07-25 13:37:48.658175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.912 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.912 [2024-07-25 13:37:48.698013] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:51.912 [2024-07-25 13:37:48.732869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.912 [2024-07-25 13:37:48.770366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.912 [2024-07-25 13:37:48.770404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.912 [2024-07-25 13:37:48.770415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.912 [2024-07-25 13:37:48.770425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.912 [2024-07-25 13:37:48.770433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.912 [2024-07-25 13:37:48.770456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 [2024-07-25 13:37:49.496324] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 [2024-07-25 13:37:49.512493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 malloc0 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.848 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.848 { 00:09:52.848 "params": { 00:09:52.848 "name": "Nvme$subsystem", 00:09:52.848 "trtype": "$TEST_TRANSPORT", 00:09:52.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.848 "adrfam": "ipv4", 00:09:52.848 "trsvcid": "$NVMF_PORT", 00:09:52.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.849 "hdgst": ${hdgst:-false}, 00:09:52.849 "ddgst": ${ddgst:-false} 00:09:52.849 }, 00:09:52.849 "method": "bdev_nvme_attach_controller" 00:09:52.849 } 00:09:52.849 EOF 00:09:52.849 )") 00:09:52.849 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:52.849 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
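Before the bdevperf run that follows, the rpc_cmd calls traced just above provisioned the zero-copy target. Replayed by hand they would look like this sketch; all commands and arguments are verbatim from the trace, but the direct rpc.py invocation is an assumption, since the test drives them through its rpc_cmd wrapper over a long-lived connection:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
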
00:09:52.849 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:52.849 13:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:52.849 "params": {
00:09:52.849 "name": "Nvme1",
00:09:52.849 "trtype": "tcp",
00:09:52.849 "traddr": "10.0.0.2",
00:09:52.849 "adrfam": "ipv4",
00:09:52.849 "trsvcid": "4420",
00:09:52.849 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:52.849 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:52.849 "hdgst": false,
00:09:52.849 "ddgst": false
00:09:52.849 },
00:09:52.849 "method": "bdev_nvme_attach_controller"
00:09:52.849 }'
00:09:52.849 [2024-07-25 13:37:49.614020] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:52.849 [2024-07-25 13:37:49.614070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150060 ]
00:09:52.849 EAL: No free 2048 kB hugepages reported on node 1
00:09:52.849 [2024-07-25 13:37:49.652653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:52.849 [2024-07-25 13:37:49.687018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.849 [2024-07-25 13:37:49.725885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.417 Running I/O for 10 seconds...
00:10:03.390
00:10:03.390 Latency(us)
00:10:03.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:03.390 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:03.390 Verification LBA range: start 0x0 length 0x1000
00:10:03.390 Nvme1n1 : 10.01 8905.74 69.58 0.00 0.00 14332.81 1664.61 32086.43
00:10:03.390 ===================================================================================================================
00:10:03.390 Total : 8905.74 69.58 0.00 0.00 14332.81 1664.61 32086.43
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=151810
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:03.390 {
00:10:03.390 "params": {
00:10:03.390 "name": "Nvme$subsystem",
00:10:03.390 "trtype": "$TEST_TRANSPORT",
00:10:03.390 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:03.390 "adrfam": "ipv4",
00:10:03.390 "trsvcid": "$NVMF_PORT",
00:10:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:03.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:03.390 "hdgst": ${hdgst:-false},
00:10:03.390 "ddgst": ${ddgst:-false} 00:10:03.390 }, 00:10:03.390 "method": "bdev_nvme_attach_controller" 00:10:03.390 } 00:10:03.390 EOF 00:10:03.390 )") 00:10:03.390 [2024-07-25 13:38:00.206607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.390 [2024-07-25 13:38:00.206642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:03.390 13:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:03.390 "params": { 00:10:03.390 "name": "Nvme1", 00:10:03.390 "trtype": "tcp", 00:10:03.390 "traddr": "10.0.0.2", 00:10:03.390 "adrfam": "ipv4", 00:10:03.390 "trsvcid": "4420", 00:10:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.390 "hdgst": false, 00:10:03.390 "ddgst": false 00:10:03.390 }, 00:10:03.390 "method": "bdev_nvme_attach_controller" 00:10:03.390 }' 00:10:03.390 [2024-07-25 13:38:00.218600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.390 [2024-07-25 13:38:00.218614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.390 [2024-07-25 13:38:00.230629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.390 [2024-07-25 13:38:00.230642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.390 [2024-07-25 13:38:00.242659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.390 [2024-07-25 13:38:00.242673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.390 [2024-07-25 13:38:00.245094] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:03.390 [2024-07-25 13:38:00.245142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151810 ] 00:10:03.390 [2024-07-25 13:38:00.254692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.390 [2024-07-25 13:38:00.254705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.390 [2024-07-25 13:38:00.266740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.390 [2024-07-25 13:38:00.266759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.278758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.278771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.649 [2024-07-25 13:38:00.283407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:03.649 [2024-07-25 13:38:00.290784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.290796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.302815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.302827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.314848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.314859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.317873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.649 [2024-07-25 13:38:00.326882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.326896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.338920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.338943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.350948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.350963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.357031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.649 [2024-07-25 13:38:00.362980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.362994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.375023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.375045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.387051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.387065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.399079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.399091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.411108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.411121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.423142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.423155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.435171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.435182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.447228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.447250] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.459243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.459259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.471278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.471296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.483303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.483315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.495335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.495348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.507372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.507388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.519407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.519423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.649 [2024-07-25 13:38:00.531440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.649 [2024-07-25 13:38:00.531454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.543477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.543497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 Running I/O for 5 seconds... 
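The error pairs repeating every few milliseconds here are expected: while bdevperf drives the 5-second randrw load, the test keeps re-issuing nvmf_subsystem_add_ns for an NSID that already exists, exercising the namespace pause/resume path under zero-copy I/O. A hedged sketch of the kind of loop that produces this pattern (the actual loop body in zcopy.sh is not shown in this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while kill -0 151810 2>/dev/null; do    # 151810 is perfpid, the bdevperf run above
      # each attempt fails with "Requested NSID 1 already in use", as traced
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
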
00:10:03.907 [2024-07-25 13:38:00.555502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.555514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.569315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.569337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.584061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.584083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.597608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.597632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.611336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.611357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.625148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.625169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.638086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.638106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.651391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.651411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.664959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.664979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.678735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.678756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.692374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.692396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.706073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.706094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.719686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.719707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.732855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.732876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.746129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 
[2024-07-25 13:38:00.746149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.759623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.759643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.773203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.773223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.907 [2024-07-25 13:38:00.787141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.907 [2024-07-25 13:38:00.787161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.799167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.799187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.813331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.813355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.828987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.829007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.843211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.843232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.854286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.854306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.869720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.869740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.884106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.884126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.897371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.897391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.911071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.911092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.925253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.925273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.938936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.938956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.953422] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.953441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.968459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.968479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.982060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.982080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:00.996096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:00.996116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:01.008410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:01.008431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:01.022052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:01.022072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:01.035461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:01.035481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.165 [2024-07-25 13:38:01.048710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.165 [2024-07-25 13:38:01.048734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.062896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.062917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.074669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.074689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.087983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.088003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.102629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.102649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.117615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.117635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.131128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.131147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.144479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.144499] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.158672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.158691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.169700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.169724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.183642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.183661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.197180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.197200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.210748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.210768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.223718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.223739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.238151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.238175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.253947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.253967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.267446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.423 [2024-07-25 13:38:01.267466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.423 [2024-07-25 13:38:01.280868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.424 [2024-07-25 13:38:01.280887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.424 [2024-07-25 13:38:01.294821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.424 [2024-07-25 13:38:01.294841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.424 [2024-07-25 13:38:01.305518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.424 [2024-07-25 13:38:01.305540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.319781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.319802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.332694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.332720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.346056] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.346076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.359595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.359614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.372908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.372928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.386235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.386255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.399849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.399869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.413033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.413052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.426508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.426527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.439636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.439655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.453038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.453057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.466292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.466311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.479745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.479764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.493392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.493416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.506687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.506707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.520422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.520442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.533833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.533853] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.547353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.547373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.683 [2024-07-25 13:38:01.560321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.683 [2024-07-25 13:38:01.560341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.574169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.574189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.587135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.587154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.600235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.600255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.613693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.613712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.627113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.627132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.640772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.640792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.654298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.654318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.668068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.668088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.681295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.681315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.694930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.694950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.708200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.708220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.722243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.722263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.733354] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.733374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.747333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.942 [2024-07-25 13:38:01.747357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.942 [2024-07-25 13:38:01.760786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.943 [2024-07-25 13:38:01.760807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.943 [2024-07-25 13:38:01.774275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.943 [2024-07-25 13:38:01.774295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.943 [2024-07-25 13:38:01.787505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.943 [2024-07-25 13:38:01.787526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.943 [2024-07-25 13:38:01.801167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.943 [2024-07-25 13:38:01.801188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.943 [2024-07-25 13:38:01.814441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.943 [2024-07-25 13:38:01.814465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.943 [2024-07-25 13:38:01.828357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.943 [2024-07-25 13:38:01.828379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.841880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.841902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.855264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.855285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.868775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.868795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.882489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.882510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.895973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.895993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.909746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.909766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.923160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.923180] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.936476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.936497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.949896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.949916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.963215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.963235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.976735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.976756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:01.990151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:01.990171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.003406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.003430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.017112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.017133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.030272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.030293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.043205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.043227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.056186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.056208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.069234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.069255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.082977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.082997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.096517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.096538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.110142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.110162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.249 [2024-07-25 13:38:02.123565] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.249 [2024-07-25 13:38:02.123590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.137163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.137185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.150416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.150435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.163725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.163745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.177229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.177249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.190414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.190434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.203933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.203953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.217621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.217641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.231194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.231214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.243896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.243916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.257227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.257251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.270764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.270785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.283971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.283991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.297479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.297499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.310707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.526 [2024-07-25 13:38:02.310732] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.526 [2024-07-25 13:38:02.323773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.323791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.527 [2024-07-25 13:38:02.337656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.337675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.527 [2024-07-25 13:38:02.351395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.351415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.527 [2024-07-25 13:38:02.365378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.365398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.527 [2024-07-25 13:38:02.375641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.375660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.527 [2024-07-25 13:38:02.389383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.389402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.527 [2024-07-25 13:38:02.403069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.527 [2024-07-25 13:38:02.403089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.417102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.417121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.432534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.432553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.446499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.446519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.461382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.461401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.475710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.475734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.489578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.489598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.503436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.503456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.516347] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.516366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.531554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.531574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.545204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.545225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.558780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.558800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.571931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.571951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.584861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.584881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.598674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.598693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.614321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.614340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.785 [2024-07-25 13:38:02.628254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.785 [2024-07-25 13:38:02.628273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.786 [2024-07-25 13:38:02.642419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.786 [2024-07-25 13:38:02.642438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.786 [2024-07-25 13:38:02.656610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.786 [2024-07-25 13:38:02.656632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.786 [2024-07-25 13:38:02.670521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.786 [2024-07-25 13:38:02.670540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.686487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.686507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.700316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.700335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.713682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.713702] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.728015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.728034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.743431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.743460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.757165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.757184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.771599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.771619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.786246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.786266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.800247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.800266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.813370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.813390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.826835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.826854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.840741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.840761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.854253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.854273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.867307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.867327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.880860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.880879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.894280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.894300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.907613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.907633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.045 [2024-07-25 13:38:02.921126] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.045 [2024-07-25 13:38:02.921147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:02.935760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:02.935780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:02.950186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:02.950206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:02.964294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:02.964315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:02.976191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:02.976211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:02.989993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:02.990013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.003982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.004002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.015802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.015822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.029870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.029890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.044072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.044093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.055848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.055868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.069314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.069335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.082483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.082504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.096032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.096052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.109144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.109164] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.122439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.122458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.135617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.135636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.148885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.148904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.162286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.162305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.175712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.175737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.304 [2024-07-25 13:38:03.189139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.304 [2024-07-25 13:38:03.189159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.202414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.202435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.215667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.215686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.229066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.229086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.242429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.242449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.256129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.256149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.269175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.269194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.282838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.282861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.296409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.296428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.309782] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.309802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.323062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.323082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.336611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.336631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.351021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.351042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.366022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.366042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.379353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.379374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.392923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.392945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.406203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.406224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.419369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.419390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.432441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.432461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.563 [2024-07-25 13:38:03.446243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.563 [2024-07-25 13:38:03.446264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.821 [2024-07-25 13:38:03.459665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.821 [2024-07-25 13:38:03.459686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.821 [2024-07-25 13:38:03.473005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.821 [2024-07-25 13:38:03.473025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.821 [2024-07-25 13:38:03.486408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.821 [2024-07-25 13:38:03.486428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.821 [2024-07-25 13:38:03.500118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.821 [2024-07-25 13:38:03.500139] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-07-25 13:38:03.513647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-25 13:38:03.513667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with timestamps advancing roughly every 13 ms, from [2024-07-25 13:38:03.527117] through [2024-07-25 13:38:05.459906]; repetitions omitted ...]
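The burst above comes from zcopy.sh re-issuing nvmf_subsystem_add_ns in a loop while NSID 1 is still attached: each RPC pauses the subsystem, fails the duplicate-NSID check in subsystem.c, and resumes. A minimal sketch of that retry shape, assuming a running target on the default /var/tmp/spdk.sock RPC socket (illustrative only, not the literal zcopy.sh source):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Keep retrying the attach; each failed attempt is one
# subsystem.c:2058 / nvmf_rpc.c:1553 pair in the log above.
until "$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1 2>/dev/null; do
    sleep 0.01    # back off briefly; the log shows ~13 ms between attempts
done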
[... the pair continues at the same cadence, [2024-07-25 13:38:05.473416] through [2024-07-25 13:38:05.567198], until the 5-second I/O job completes and its summary prints ...]
Latency(us)
Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
Nvme1n1                     :       5.01   17448.36     136.32       0.00     0.00    7329.70    2542.80   19713.23
=====================================================================================================================
Total                       :              17448.36     136.32       0.00     0.00    7329.70    2542.80   19713.23
[... a final burst of the same error pair runs during teardown, [2024-07-25 13:38:05.576681] through [2024-07-25 13:38:05.733103]; repetitions omitted ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (151810) - No such process
13:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 151810
13:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
13:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
delay0
[... xtrace_disable / set +x bookkeeping records from common/autotest_common.sh omitted between commands ...]
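The bdev_delay_create call above wraps malloc0 in a delay bdev named delay0. A hedged annotation of the flags, per the SPDK bdev_delay documentation (all latencies in microseconds, so 1000000 injects a full second):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# -b: base bdev to wrap; -d: name of the delay bdev to create.
# -r/-w: average read/write latency to inject;
# -t/-n: p99 (tail) read/write latency.
"$RPC" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000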
13:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
13:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 13:38:05.870873] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
Initialization complete. Launching workers.
NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 57
CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 334, failed to submit 43
success 125, unsuccess 209, failed 0
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 149924
[... shell-conditional and xtrace bookkeeping records ('[' tcp == tcp ']', set +e / set -e, '[' -n 149924 ']', uname checks, etc.) omitted ...]
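For reference, the abort example invocation above, annotated; the flag meanings follow the usual SPDK example-app conventions, so treat this as a best-effort reading rather than authoritative documentation:

ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
# -c 0x1: run on core 0 only; -t 5: run for 5 seconds; -q 64: queue depth;
# -w randrw -M 50: random I/O with a 50% read mix; -l warning: log level;
# -r: transport ID of the listener this run set up (TCP, 10.0.0.2:4420, NSID 1).
"$ABORT" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'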
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 149924
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 149924'
killing process with pid 149924
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 149924
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 149924
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
13:38:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
[... conditional and xtrace bookkeeping records omitted ...]

real    0m33.150s
user    0m42.187s
sys     0m13.484s
************************************
END TEST nvmf_zcopy
************************************
13:38:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
************************************
START TEST nvmf_nmic
************************************
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
* Looking for test storage...
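The END/START banners around the test scripts come from the run_test helper in autotest_common.sh. Conceptually it is close to this sketch (the real helper also manages xtrace and records timing for the report):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # run the test script with its arguments, reporting real/user/sys
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp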
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
[... paths/export.sh@2-@6 records omitted: each re-exports the same very long PATH (Go, protoc and golangci toolchain directories prepended to the system PATH); scripts/common.sh checks and build_nvmf_app_args bookkeeping (-i "$NVMF_APP_SHM_ID" -e 0xFFFF) also omitted ...]
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
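The host-identity defaults above are what later nvme connect calls consume. A sketch of how they fit together; the connect target shown is illustrative, not taken from this log:

# gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<host-uuid>; the uuid
# suffix doubles as the host ID (an assumption of this sketch, matching the
# values traced above).
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Example consumer (hypothetical target address/subsystem):
nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1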
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
13:38:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
[... array bookkeeping records omitted: nvmf/common.sh@289-@320 declare the pci_devs/pci_net_devs/net_devs arrays and the e810/x722/mlx device-ID tables used to match supported NICs ...]
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
[... remaining device-matching conditionals ([[ ice == unknown ]], [[ 0x159b == ... ]], [[ up == up ]], etc.) omitted ...]
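The "Found net devices under ..." lines come from globbing each PCI function's sysfs node, as the trace itself shows with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). The same discovery, stripped to its core:

# The kernel exposes each interface name under its PCI device's sysfs node.
for pci in 0000:af:00.0 0000:af:00.1; do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
done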
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:24.478 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:24.478 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:24.478 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:24.478 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:24.478 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:24.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:24.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms
00:10:24.738
00:10:24.738 --- 10.0.0.2 ping statistics ---
00:10:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:24.738 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:24.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:24.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms
00:10:24.738
00:10:24.738 --- 10.0.0.1 ping statistics ---
00:10:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:24.738 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=157588
00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 157588 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 157588 ']' 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.738 13:38:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.738 [2024-07-25 13:38:21.589509] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:24.738 [2024-07-25 13:38:21.589556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.998 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.998 [2024-07-25 13:38:21.630218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:24.998 [2024-07-25 13:38:21.664042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.998 [2024-07-25 13:38:21.703448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.998 [2024-07-25 13:38:21.703488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.998 [2024-07-25 13:38:21.703498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.998 [2024-07-25 13:38:21.703506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.998 [2024-07-25 13:38:21.703513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
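The nvmf_tcp_init trace above amounts to a small two-namespace topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace where the target will run, so traffic from the initiator port (cvl_0_1) has to cross the physical link. A condensed sketch of the same setup, using the interface names and addresses from the trace (run as root; this is lifted from the traced commands, not a verbatim copy of nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

This is also why nvmf_tgt is launched as 'ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...' above: NVMF_APP is prefixed with NVMF_TARGET_NS_CMD so the target listens inside the namespace.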
00:10:24.998 [2024-07-25 13:38:21.703567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.998 [2024-07-25 13:38:21.703665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.998 [2024-07-25 13:38:21.703731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.998 [2024-07-25 13:38:21.703736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.566 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.566 [2024-07-25 13:38:22.451232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 Malloc0 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.826 [2024-07-25 13:38:22.505721] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:10:25.826 test case1: single bdev can't be used in multiple subsystems
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:25.826 [2024-07-25 13:38:22.529605] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:10:25.826 [2024-07-25 13:38:22.529626] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:10:25.826 [2024-07-25 13:38:22.529640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.826 request:
00:10:25.826 {
00:10:25.826 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:25.826 "namespace": {
00:10:25.826 "bdev_name": "Malloc0",
00:10:25.826 "no_auto_visible": false
00:10:25.826 },
00:10:25.826 "method": "nvmf_subsystem_add_ns",
00:10:25.826 "req_id": 1
00:10:25.826 }
00:10:25.826 Got JSON-RPC error response
00:10:25.826 response:
00:10:25.826 {
00:10:25.826 "code": -32602,
00:10:25.826 "message": "Invalid parameters"
00:10:25.826 }
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:10:25.826 Adding namespace failed - expected result.
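test case1 above is exercising bdev claim semantics: when nvmf_subsystem_add_ns attached Malloc0 to cnode1, the NVMe-oF target opened the bdev with an exclusive_write claim, so the second attach to cnode2 fails inside bdev_open (error=-1) and is surfaced to the RPC client as JSON-RPC -32602 'Invalid parameters'. A minimal reproduction against a running nvmf_tgt, with the long rpc.py path from the trace shortened to rpc.py (the commands themselves are the ones traced above):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # cnode1 now holds the exclusive_write claim
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed

The test counts the rejection as the pass condition, hence nmic_status=1 and 'Adding namespace failed - expected result.'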
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:10:25.826 test case2: host connect to nvmf target in multiple paths
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:25.826 [2024-07-25 13:38:22.545751] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.826 13:38:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:27.230 13:38:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:10:28.607 13:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:28.607 13:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:10:28.607 13:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:28.607 13:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:10:28.607 13:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:10:30.561 13:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:30.561 [global]
00:10:30.561 thread=1
00:10:30.561 invalidate=1
00:10:30.561 rw=write
00:10:30.561 time_based=1
00:10:30.561 runtime=1
00:10:30.561 ioengine=libaio
00:10:30.561 direct=1
00:10:30.561 bs=4096
00:10:30.561 iodepth=1
00:10:30.561 norandommap=0
00:10:30.561 numjobs=1
00:10:30.561
00:10:30.561 verify_dump=1
00:10:30.561 verify_backlog=512
00:10:30.561 verify_state_save=0
00:10:30.561 do_verify=1
00:10:30.561 verify=crc32c-intel
00:10:30.561 [job0]
00:10:30.561 filename=/dev/nvme0n1
00:10:30.561 Could not set queue depth (nvme0n1)
00:10:30.817 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:30.817 fio-3.35
00:10:30.817 Starting 1 thread
00:10:31.747
00:10:31.747 job0: (groupid=0, jobs=1): err= 0: pid=158811: Thu Jul 25 13:38:28 2024
00:10:31.747 read: IOPS=1025, BW=4104KiB/s (4202kB/s)(4120KiB/1004msec)
00:10:31.747 slat (nsec): min=8625, max=31488, avg=9377.17, stdev=1623.77
00:10:31.747 clat (usec): min=323, max=41509, avg=590.97, stdev=2833.28
00:10:31.747 lat (usec): min=332, max=41520, avg=600.35, stdev=2833.98
00:10:31.747 clat percentiles (usec):
00:10:31.747 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 355], 20.00th=[ 359],
00:10:31.747 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 396],
00:10:31.747 | 70.00th=[ 400], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 449],
00:10:31.747 | 99.00th=[ 529], 99.50th=[ 660], 99.90th=[41681], 99.95th=[41681],
00:10:31.747 | 99.99th=[41681]
00:10:31.747 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets
00:10:31.747 slat (nsec): min=9136, max=39877, avg=12344.84, stdev=2132.66
00:10:31.747 clat (usec): min=173, max=588, avg=235.00, stdev=33.00
00:10:31.747 lat (usec): min=185, max=624, avg=247.34, stdev=33.71
00:10:31.747 clat percentiles (usec):
00:10:31.747 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208],
00:10:31.747 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 239],
00:10:31.747 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 273], 95.00th=[ 277],
00:10:31.747 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 562], 99.95th=[ 586],
00:10:31.747 | 99.99th=[ 586]
00:10:31.747 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2
00:10:31.747 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2
00:10:31.747 lat (usec) : 250=39.05%, 500=59.94%, 750=0.82%
00:10:31.747 lat (msec) : 50=0.19%
00:10:31.747 cpu : usr=1.40%, sys=3.19%, ctx=2566, majf=0, minf=2
00:10:31.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:31.747 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:31.747 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:31.747
00:10:31.747 Run status group 0 (all jobs):
00:10:31.747 READ: bw=4104KiB/s (4202kB/s), 4104KiB/s-4104KiB/s (4202kB/s-4202kB/s), io=4120KiB (4219kB), run=1004-1004msec
00:10:31.747 WRITE: bw=6120KiB/s (6266kB/s), 6120KiB/s-6120KiB/s (6266kB/s-6266kB/s), io=6144KiB (6291kB), run=1004-1004msec
00:10:31.747
00:10:31.747 Disk stats (read/write):
00:10:31.747 nvme0n1: ios=1076/1536, merge=0/0, ticks=740/353, in_queue=1093, util=96.99%
00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:32.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:10:32.005 13:38:28
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.005 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.263 rmmod nvme_tcp 00:10:32.263 rmmod nvme_fabrics 00:10:32.263 rmmod nvme_keyring 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 157588 ']' 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 157588 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 157588 ']' 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 157588 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.263 13:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 157588 00:10:32.263 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.263 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.263 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 157588' 00:10:32.263 killing process with pid 157588 00:10:32.263 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 157588 00:10:32.263 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 157588 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.522 13:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.418 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.418 00:10:34.418 real 0m16.746s 00:10:34.418 user 0m39.951s 00:10:34.418 sys 0m6.297s 00:10:34.418 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.418 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.418 ************************************ 00:10:34.418 END TEST nvmf_nmic 00:10:34.418 ************************************ 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.675 ************************************ 00:10:34.675 START TEST nvmf_fio_target 00:10:34.675 ************************************ 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:34.675 * Looking for test storage... 00:10:34.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:34.675 13:38:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.224 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:41.225 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:41.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:41.225 Found net devices under 0000:af:00.0: cvl_0_0 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:41.225 Found net devices under 0000:af:00.1: cvl_0_1 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.225 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.482 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.482 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.482 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.482 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.482 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.482 13:38:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.482 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:10:41.740 00:10:41.740 --- 10.0.0.2 ping statistics --- 00:10:41.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.740 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:10:41.740 00:10:41.740 --- 10.0.0.1 ping statistics --- 00:10:41.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.740 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=162765 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 162765 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 162765 ']' 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.740 13:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.740 [2024-07-25 13:38:38.507730] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:41.740 [2024-07-25 13:38:38.507790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.740 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.740 [2024-07-25 13:38:38.549895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:41.740 [2024-07-25 13:38:38.584938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.740 [2024-07-25 13:38:38.626099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.740 [2024-07-25 13:38:38.626141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.740 [2024-07-25 13:38:38.626151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.740 [2024-07-25 13:38:38.626159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.740 [2024-07-25 13:38:38.626167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
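With the target up, fio.sh builds a wider namespace layout than the nmic test: two plain malloc bdevs, a RAID0 array over two more, and a concat array over three, all exposed through cnode1 so the connected host sees four namespaces (nvme0n1..nvme0n4). Condensed from the rpc.py calls traced below, again with the full rpc.py path shortened:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                  # run once per bdev -> Malloc0 .. Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial SPDKISFASTANDAWESOME 4 then polls lsblk until all four namespaces show up before fio-wrapper starts the write/verify jobs.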
00:10:41.740 [2024-07-25 13:38:38.626207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.740 [2024-07-25 13:38:38.626226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.740 [2024-07-25 13:38:38.626304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.740 [2024-07-25 13:38:38.626305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.669 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:42.669 [2024-07-25 13:38:39.532533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.925 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.925 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:42.925 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.182 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:43.182 13:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.438 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:43.438 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.695 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:43.695 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:43.695 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.951 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:43.951 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.208 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:44.208 13:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.465 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:44.465 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:44.465 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.721 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.721 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.977 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.978 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.978 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.236 [2024-07-25 13:38:41.968790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.236 13:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:45.504 13:38:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:45.504 13:38:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.877 13:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:46.877 13:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.877 13:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.877 13:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:46.877 13:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:46.877 13:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:49.401 13:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:49.401 13:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:49.401 13:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.401 13:38:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:49.401 13:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.401 13:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:49.401 13:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:49.401 [global] 00:10:49.401 thread=1 00:10:49.402 invalidate=1 00:10:49.402 rw=write 00:10:49.402 time_based=1 00:10:49.402 runtime=1 00:10:49.402 ioengine=libaio 00:10:49.402 direct=1 00:10:49.402 bs=4096 00:10:49.402 iodepth=1 00:10:49.402 norandommap=0 00:10:49.402 numjobs=1 00:10:49.402 00:10:49.402 verify_dump=1 00:10:49.402 verify_backlog=512 00:10:49.402 verify_state_save=0 00:10:49.402 do_verify=1 00:10:49.402 verify=crc32c-intel 00:10:49.402 [job0] 00:10:49.402 filename=/dev/nvme0n1 00:10:49.402 [job1] 00:10:49.402 filename=/dev/nvme0n2 00:10:49.402 [job2] 00:10:49.402 filename=/dev/nvme0n3 00:10:49.402 [job3] 00:10:49.402 filename=/dev/nvme0n4 00:10:49.402 Could not set queue depth (nvme0n1) 00:10:49.402 Could not set queue depth (nvme0n2) 00:10:49.402 Could not set queue depth (nvme0n3) 00:10:49.402 Could not set queue depth (nvme0n4) 00:10:49.402 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.402 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.402 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.402 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.402 fio-3.35 00:10:49.402 Starting 4 threads 00:10:50.776 00:10:50.776 job0: (groupid=0, jobs=1): err= 0: pid=164304: Thu Jul 25 13:38:47 2024 00:10:50.776 read: IOPS=1052, BW=4212KiB/s (4313kB/s)(4296KiB/1020msec) 00:10:50.776 slat (nsec): min=8597, max=40330, avg=9715.58, stdev=2040.98 00:10:50.776 clat (usec): min=248, max=41016, avg=563.39, stdev=1779.02 00:10:50.776 lat (usec): min=257, max=41029, avg=573.11, stdev=1779.21 00:10:50.776 clat percentiles (usec): 00:10:50.776 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 347], 20.00th=[ 457], 00:10:50.776 | 30.00th=[ 482], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 510], 00:10:50.776 | 70.00th=[ 515], 80.00th=[ 523], 90.00th=[ 529], 95.00th=[ 545], 00:10:50.776 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[41157], 99.95th=[41157], 00:10:50.776 | 99.99th=[41157] 00:10:50.776 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:10:50.776 slat (usec): min=7, max=594, avg=13.59, stdev=17.42 00:10:50.776 clat (usec): min=138, max=3063, avg=244.79, stdev=101.30 00:10:50.776 lat (usec): min=189, max=3191, avg=258.38, stdev=111.29 00:10:50.776 clat percentiles (usec): 00:10:50.776 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 210], 00:10:50.776 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:10:50.776 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 314], 00:10:50.776 | 99.00th=[ 359], 99.50th=[ 433], 99.90th=[ 2606], 99.95th=[ 3064], 00:10:50.776 | 99.99th=[ 3064] 00:10:50.776 bw ( KiB/s): min= 5336, max= 6952, per=43.71%, avg=6144.00, stdev=1142.68, samples=2 00:10:50.777 iops : min= 1334, max= 1738, avg=1536.00, stdev=285.67, 
samples=2 00:10:50.777 lat (usec) : 250=37.93%, 500=38.47%, 750=23.41% 00:10:50.777 lat (msec) : 4=0.08%, 20=0.04%, 50=0.08% 00:10:50.777 cpu : usr=1.96%, sys=4.71%, ctx=2610, majf=0, minf=2 00:10:50.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 issued rwts: total=1074,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.777 job1: (groupid=0, jobs=1): err= 0: pid=164305: Thu Jul 25 13:38:47 2024 00:10:50.777 read: IOPS=508, BW=2035KiB/s (2084kB/s)(2076KiB/1020msec) 00:10:50.777 slat (usec): min=9, max=231, avg=10.97, stdev=11.54 00:10:50.777 clat (usec): min=284, max=41275, avg=1414.60, stdev=6351.21 00:10:50.777 lat (usec): min=374, max=41287, avg=1425.57, stdev=6352.85 00:10:50.777 clat percentiles (usec): 00:10:50.777 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:10:50.777 | 30.00th=[ 392], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 400], 00:10:50.777 | 70.00th=[ 404], 80.00th=[ 408], 90.00th=[ 416], 95.00th=[ 437], 00:10:50.777 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.777 | 99.99th=[41157] 00:10:50.777 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:10:50.777 slat (nsec): min=12430, max=48924, avg=14082.61, stdev=2578.89 00:10:50.777 clat (usec): min=143, max=1904, avg=254.77, stdev=96.57 00:10:50.777 lat (usec): min=156, max=1917, avg=268.85, stdev=96.84 00:10:50.777 clat percentiles (usec): 00:10:50.777 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 180], 20.00th=[ 215], 00:10:50.777 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 262], 00:10:50.777 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 330], 00:10:50.777 | 99.00th=[ 408], 99.50th=[ 478], 99.90th=[ 1795], 99.95th=[ 1909], 00:10:50.777 | 99.99th=[ 1909] 00:10:50.777 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=2 00:10:50.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:50.777 lat (usec) : 250=32.27%, 500=66.36%, 750=0.32% 00:10:50.777 lat (msec) : 2=0.19%, 50=0.84% 00:10:50.777 cpu : usr=1.37%, sys=2.94%, ctx=1544, majf=0, minf=1 00:10:50.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.777 job2: (groupid=0, jobs=1): err= 0: pid=164306: Thu Jul 25 13:38:47 2024 00:10:50.777 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:10:50.777 slat (nsec): min=11744, max=26064, avg=23967.52, stdev=2861.24 00:10:50.777 clat (usec): min=40777, max=41624, avg=40987.24, stdev=163.28 00:10:50.777 lat (usec): min=40802, max=41636, avg=41011.21, stdev=160.72 00:10:50.777 clat percentiles (usec): 00:10:50.777 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:50.777 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:50.777 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:50.777 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:50.777 | 
99.99th=[41681] 00:10:50.777 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:50.777 slat (nsec): min=12429, max=51735, avg=15120.77, stdev=4297.42 00:10:50.777 clat (usec): min=221, max=504, avg=267.83, stdev=26.87 00:10:50.777 lat (usec): min=235, max=545, avg=282.96, stdev=28.37 00:10:50.777 clat percentiles (usec): 00:10:50.777 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:10:50.777 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:10:50.777 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 306], 00:10:50.777 | 99.00th=[ 367], 99.50th=[ 437], 99.90th=[ 506], 99.95th=[ 506], 00:10:50.777 | 99.99th=[ 506] 00:10:50.777 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.777 lat (usec) : 250=21.39%, 500=74.48%, 750=0.19% 00:10:50.777 lat (msec) : 50=3.94% 00:10:50.777 cpu : usr=0.60%, sys=0.99%, ctx=534, majf=0, minf=1 00:10:50.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.777 job3: (groupid=0, jobs=1): err= 0: pid=164307: Thu Jul 25 13:38:47 2024 00:10:50.777 read: IOPS=207, BW=831KiB/s (851kB/s)(832KiB/1001msec) 00:10:50.777 slat (nsec): min=9195, max=26227, avg=11283.25, stdev=4517.99 00:10:50.777 clat (usec): min=322, max=42141, avg=4169.84, stdev=11756.27 00:10:50.777 lat (usec): min=332, max=42152, avg=4181.12, stdev=11760.32 00:10:50.777 clat percentiles (usec): 00:10:50.777 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 355], 20.00th=[ 396], 00:10:50.777 | 30.00th=[ 433], 40.00th=[ 453], 50.00th=[ 474], 60.00th=[ 482], 00:10:50.777 | 70.00th=[ 494], 80.00th=[ 502], 90.00th=[ 586], 95.00th=[41157], 00:10:50.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:50.777 | 99.99th=[42206] 00:10:50.777 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:50.777 slat (nsec): min=12082, max=40094, avg=13166.29, stdev=1755.89 00:10:50.777 clat (usec): min=206, max=435, avg=238.25, stdev=22.80 00:10:50.777 lat (usec): min=219, max=475, avg=251.42, stdev=23.35 00:10:50.777 clat percentiles (usec): 00:10:50.777 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 221], 00:10:50.777 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:10:50.777 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 281], 00:10:50.777 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 437], 99.95th=[ 437], 00:10:50.777 | 99.99th=[ 437] 00:10:50.777 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.777 lat (usec) : 250=56.25%, 500=36.81%, 750=4.31% 00:10:50.777 lat (msec) : 50=2.64% 00:10:50.777 cpu : usr=0.30%, sys=1.20%, ctx=721, majf=0, minf=1 00:10:50.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.777 issued rwts: total=208,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
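(Aside: the write/verify pass above is driven by the generated job file shown before the run — one libaio job per namespace, 4 KiB blocks, queue depth 1, CRC32C data verification. A standalone equivalent, sketched here against the same /dev/nvme0n1 device with a stock fio binary rather than the test wrapper's exact invocation, would be:

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --rw=write --bs=4096 --iodepth=1 \
    --numjobs=1 --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The verify options make fio read back and checksum everything it wrote, which is the point of this pass: exercising the TCP data path in both directions, not just measuring throughput.)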
00:10:50.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.777 00:10:50.777 Run status group 0 (all jobs): 00:10:50.777 READ: bw=7145KiB/s (7317kB/s), 83.3KiB/s-4212KiB/s (85.3kB/s-4313kB/s), io=7288KiB (7463kB), run=1001-1020msec 00:10:50.777 WRITE: bw=13.7MiB/s (14.4MB/s), 2032KiB/s-6024KiB/s (2081kB/s-6168kB/s), io=14.0MiB (14.7MB), run=1001-1020msec 00:10:50.777 00:10:50.777 Disk stats (read/write): 00:10:50.777 nvme0n1: ios=1074/1229, merge=0/0, ticks=535/292, in_queue=827, util=85.17% 00:10:50.777 nvme0n2: ios=536/1024, merge=0/0, ticks=1387/249, in_queue=1636, util=87.41% 00:10:50.777 nvme0n3: ios=38/512, merge=0/0, ticks=1528/135, in_queue=1663, util=91.49% 00:10:50.777 nvme0n4: ios=74/512, merge=0/0, ticks=1130/120, in_queue=1250, util=94.40% 00:10:50.777 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:50.777 [global] 00:10:50.777 thread=1 00:10:50.777 invalidate=1 00:10:50.777 rw=randwrite 00:10:50.777 time_based=1 00:10:50.777 runtime=1 00:10:50.777 ioengine=libaio 00:10:50.777 direct=1 00:10:50.777 bs=4096 00:10:50.777 iodepth=1 00:10:50.777 norandommap=0 00:10:50.777 numjobs=1 00:10:50.777 00:10:50.777 verify_dump=1 00:10:50.777 verify_backlog=512 00:10:50.777 verify_state_save=0 00:10:50.777 do_verify=1 00:10:50.777 verify=crc32c-intel 00:10:50.777 [job0] 00:10:50.777 filename=/dev/nvme0n1 00:10:50.777 [job1] 00:10:50.777 filename=/dev/nvme0n2 00:10:50.777 [job2] 00:10:50.777 filename=/dev/nvme0n3 00:10:50.777 [job3] 00:10:50.777 filename=/dev/nvme0n4 00:10:50.777 Could not set queue depth (nvme0n1) 00:10:50.777 Could not set queue depth (nvme0n2) 00:10:50.777 Could not set queue depth (nvme0n3) 00:10:50.777 Could not set queue depth (nvme0n4) 00:10:51.036 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.036 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.036 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.036 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.036 fio-3.35 00:10:51.036 Starting 4 threads 00:10:52.412 00:10:52.412 job0: (groupid=0, jobs=1): err= 0: pid=164721: Thu Jul 25 13:38:49 2024 00:10:52.412 read: IOPS=1402, BW=5610KiB/s (5745kB/s)(5616KiB/1001msec) 00:10:52.412 slat (nsec): min=8862, max=24876, avg=9794.71, stdev=1293.82 00:10:52.412 clat (usec): min=265, max=2095, avg=405.81, stdev=71.05 00:10:52.412 lat (usec): min=275, max=2106, avg=415.61, stdev=71.10 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 347], 20.00th=[ 379], 00:10:52.412 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 408], 00:10:52.412 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 482], 95.00th=[ 498], 00:10:52.412 | 99.00th=[ 553], 99.50th=[ 603], 99.90th=[ 1045], 99.95th=[ 2089], 00:10:52.412 | 99.99th=[ 2089] 00:10:52.412 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:52.412 slat (nsec): min=11452, max=40895, avg=13285.22, stdev=2075.88 00:10:52.412 clat (usec): min=180, max=1145, avg=252.58, stdev=46.25 00:10:52.412 lat (usec): min=193, max=1161, avg=265.86, stdev=46.68 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 
210], 20.00th=[ 219], 00:10:52.412 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 260], 00:10:52.412 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 322], 00:10:52.412 | 99.00th=[ 388], 99.50th=[ 441], 99.90th=[ 562], 99.95th=[ 1139], 00:10:52.412 | 99.99th=[ 1139] 00:10:52.412 bw ( KiB/s): min= 8192, max= 8192, per=41.12%, avg=8192.00, stdev= 0.00, samples=1 00:10:52.412 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:52.412 lat (usec) : 250=28.13%, 500=69.63%, 750=2.11%, 1000=0.03% 00:10:52.412 lat (msec) : 2=0.07%, 4=0.03% 00:10:52.412 cpu : usr=3.20%, sys=4.30%, ctx=2944, majf=0, minf=1 00:10:52.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 issued rwts: total=1404,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.412 job1: (groupid=0, jobs=1): err= 0: pid=164722: Thu Jul 25 13:38:49 2024 00:10:52.412 read: IOPS=1220, BW=4883KiB/s (5000kB/s)(4888KiB/1001msec) 00:10:52.412 slat (nsec): min=5103, max=45413, avg=9325.39, stdev=1900.46 00:10:52.412 clat (usec): min=375, max=723, avg=498.79, stdev=34.29 00:10:52.412 lat (usec): min=381, max=733, avg=508.11, stdev=34.57 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 400], 5.00th=[ 424], 10.00th=[ 453], 20.00th=[ 490], 00:10:52.412 | 30.00th=[ 498], 40.00th=[ 502], 50.00th=[ 506], 60.00th=[ 510], 00:10:52.412 | 70.00th=[ 515], 80.00th=[ 519], 90.00th=[ 523], 95.00th=[ 529], 00:10:52.412 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 725], 99.95th=[ 725], 00:10:52.412 | 99.99th=[ 725] 00:10:52.412 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:52.412 slat (nsec): min=7180, max=61505, avg=11284.94, stdev=2828.80 00:10:52.412 clat (usec): min=147, max=1611, avg=230.79, stdev=44.97 00:10:52.412 lat (usec): min=155, max=1633, avg=242.07, stdev=45.74 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 167], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:10:52.412 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:10:52.412 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 273], 00:10:52.412 | 99.00th=[ 322], 99.50th=[ 359], 99.90th=[ 523], 99.95th=[ 1614], 00:10:52.412 | 99.99th=[ 1614] 00:10:52.413 bw ( KiB/s): min= 7824, max= 7824, per=39.27%, avg=7824.00, stdev= 0.00, samples=1 00:10:52.413 iops : min= 1956, max= 1956, avg=1956.00, stdev= 0.00, samples=1 00:10:52.413 lat (usec) : 250=46.99%, 500=25.74%, 750=27.23% 00:10:52.413 lat (msec) : 2=0.04% 00:10:52.413 cpu : usr=2.80%, sys=4.00%, ctx=2759, majf=0, minf=2 00:10:52.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 issued rwts: total=1222,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.413 job2: (groupid=0, jobs=1): err= 0: pid=164723: Thu Jul 25 13:38:49 2024 00:10:52.413 read: IOPS=104, BW=416KiB/s (426kB/s)(428KiB/1028msec) 00:10:52.413 slat (nsec): min=8801, max=25950, avg=12224.71, stdev=6120.29 00:10:52.413 clat (usec): min=361, max=41425, avg=8397.57, stdev=16200.89 00:10:52.413 lat (usec): 
min=370, max=41437, avg=8409.80, stdev=16206.19 00:10:52.413 clat percentiles (usec): 00:10:52.413 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 392], 00:10:52.413 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 474], 00:10:52.413 | 70.00th=[ 494], 80.00th=[ 627], 90.00th=[41157], 95.00th=[41157], 00:10:52.413 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:52.413 | 99.99th=[41681] 00:10:52.413 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:10:52.413 slat (nsec): min=11419, max=40134, avg=12220.44, stdev=1623.67 00:10:52.413 clat (usec): min=186, max=456, avg=234.51, stdev=27.00 00:10:52.413 lat (usec): min=198, max=496, avg=246.73, stdev=27.52 00:10:52.413 clat percentiles (usec): 00:10:52.413 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 215], 00:10:52.413 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:10:52.413 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 281], 00:10:52.413 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 457], 99.95th=[ 457], 00:10:52.413 | 99.99th=[ 457] 00:10:52.413 bw ( KiB/s): min= 4096, max= 4096, per=20.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.413 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.413 lat (usec) : 250=63.65%, 500=31.66%, 750=1.29% 00:10:52.413 lat (msec) : 50=3.39% 00:10:52.413 cpu : usr=0.39%, sys=0.78%, ctx=620, majf=0, minf=1 00:10:52.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 issued rwts: total=107,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.413 job3: (groupid=0, jobs=1): err= 0: pid=164724: Thu Jul 25 13:38:49 2024 00:10:52.413 read: IOPS=1446, BW=5786KiB/s (5925kB/s)(5792KiB/1001msec) 00:10:52.413 slat (nsec): min=8942, max=33573, avg=9793.84, stdev=1436.09 00:10:52.413 clat (usec): min=260, max=1769, avg=408.26, stdev=55.63 00:10:52.413 lat (usec): min=269, max=1781, avg=418.06, stdev=55.79 00:10:52.413 clat percentiles (usec): 00:10:52.413 | 1.00th=[ 326], 5.00th=[ 355], 10.00th=[ 375], 20.00th=[ 388], 00:10:52.413 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 408], 00:10:52.413 | 70.00th=[ 412], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 486], 00:10:52.413 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[ 930], 99.95th=[ 1762], 00:10:52.413 | 99.99th=[ 1762] 00:10:52.413 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:52.413 slat (nsec): min=11898, max=38630, avg=13258.70, stdev=1651.66 00:10:52.413 clat (usec): min=159, max=704, avg=238.17, stdev=39.56 00:10:52.413 lat (usec): min=172, max=720, avg=251.43, stdev=39.66 00:10:52.413 clat percentiles (usec): 00:10:52.413 | 1.00th=[ 174], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 215], 00:10:52.413 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:10:52.413 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 322], 00:10:52.413 | 99.00th=[ 388], 99.50th=[ 392], 99.90th=[ 482], 99.95th=[ 701], 00:10:52.413 | 99.99th=[ 701] 00:10:52.413 bw ( KiB/s): min= 8192, max= 8192, per=41.12%, avg=8192.00, stdev= 0.00, samples=1 00:10:52.413 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:52.413 lat (usec) : 250=39.31%, 500=59.05%, 750=1.54%, 1000=0.07% 00:10:52.413 lat 
(msec) : 2=0.03% 00:10:52.413 cpu : usr=1.90%, sys=4.20%, ctx=2988, majf=0, minf=1 00:10:52.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 issued rwts: total=1448,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.413 00:10:52.413 Run status group 0 (all jobs): 00:10:52.413 READ: bw=15.9MiB/s (16.7MB/s), 416KiB/s-5786KiB/s (426kB/s-5925kB/s), io=16.3MiB (17.1MB), run=1001-1028msec 00:10:52.413 WRITE: bw=19.5MiB/s (20.4MB/s), 1992KiB/s-6138KiB/s (2040kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1028msec 00:10:52.413 00:10:52.413 Disk stats (read/write): 00:10:52.413 nvme0n1: ios=1059/1479, merge=0/0, ticks=1331/358, in_queue=1689, util=97.39% 00:10:52.413 nvme0n2: ios=1060/1228, merge=0/0, ticks=577/284, in_queue=861, util=90.18% 00:10:52.413 nvme0n3: ios=131/512, merge=0/0, ticks=753/115, in_queue=868, util=89.27% 00:10:52.413 nvme0n4: ios=1075/1536, merge=0/0, ticks=1358/352, in_queue=1710, util=100.00% 00:10:52.413 13:38:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:52.413 [global] 00:10:52.413 thread=1 00:10:52.413 invalidate=1 00:10:52.413 rw=write 00:10:52.413 time_based=1 00:10:52.413 runtime=1 00:10:52.413 ioengine=libaio 00:10:52.413 direct=1 00:10:52.413 bs=4096 00:10:52.413 iodepth=128 00:10:52.413 norandommap=0 00:10:52.413 numjobs=1 00:10:52.413 00:10:52.413 verify_dump=1 00:10:52.413 verify_backlog=512 00:10:52.413 verify_state_save=0 00:10:52.413 do_verify=1 00:10:52.413 verify=crc32c-intel 00:10:52.413 [job0] 00:10:52.413 filename=/dev/nvme0n1 00:10:52.413 [job1] 00:10:52.413 filename=/dev/nvme0n2 00:10:52.413 [job2] 00:10:52.413 filename=/dev/nvme0n3 00:10:52.413 [job3] 00:10:52.413 filename=/dev/nvme0n4 00:10:52.413 Could not set queue depth (nvme0n1) 00:10:52.413 Could not set queue depth (nvme0n2) 00:10:52.413 Could not set queue depth (nvme0n3) 00:10:52.413 Could not set queue depth (nvme0n4) 00:10:52.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.672 fio-3.35 00:10:52.672 Starting 4 threads 00:10:54.050 00:10:54.050 job0: (groupid=0, jobs=1): err= 0: pid=165149: Thu Jul 25 13:38:50 2024 00:10:54.050 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:54.050 slat (usec): min=2, max=14406, avg=108.38, stdev=753.05 00:10:54.050 clat (usec): min=1856, max=73060, avg=13732.94, stdev=8467.60 00:10:54.050 lat (usec): min=1865, max=73075, avg=13841.32, stdev=8543.25 00:10:54.050 clat percentiles (usec): 00:10:54.050 | 1.00th=[ 3458], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:10:54.050 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[12125], 00:10:54.050 | 70.00th=[13304], 80.00th=[16057], 90.00th=[20579], 95.00th=[28181], 00:10:54.050 | 99.00th=[63701], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 
00:10:54.050 | 99.99th=[72877] 00:10:54.050 write: IOPS=4191, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec); 0 zone resets 00:10:54.050 slat (usec): min=3, max=6143, avg=118.46, stdev=576.86 00:10:54.050 clat (usec): min=1967, max=67886, avg=16860.76, stdev=12757.90 00:10:54.050 lat (usec): min=1984, max=67895, avg=16979.22, stdev=12838.21 00:10:54.050 clat percentiles (usec): 00:10:54.050 | 1.00th=[ 4490], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9503], 00:10:54.050 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469], 00:10:54.050 | 70.00th=[14877], 80.00th=[20841], 90.00th=[40109], 95.00th=[46400], 00:10:54.050 | 99.00th=[58459], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:10:54.050 | 99.99th=[67634] 00:10:54.050 bw ( KiB/s): min=13640, max=19184, per=22.15%, avg=16412.00, stdev=3920.20, samples=2 00:10:54.050 iops : min= 3410, max= 4796, avg=4103.00, stdev=980.05, samples=2 00:10:54.050 lat (msec) : 2=0.24%, 4=1.59%, 10=25.97%, 20=54.01%, 50=15.84% 00:10:54.050 lat (msec) : 100=2.35% 00:10:54.050 cpu : usr=4.68%, sys=6.57%, ctx=399, majf=0, minf=1 00:10:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.051 issued rwts: total=4096,4212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.051 job1: (groupid=0, jobs=1): err= 0: pid=165150: Thu Jul 25 13:38:50 2024 00:10:54.051 read: IOPS=6623, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1004msec) 00:10:54.051 slat (nsec): min=1720, max=6901.4k, avg=56730.59, stdev=443689.21 00:10:54.051 clat (usec): min=1146, max=39313, avg=10221.99, stdev=3574.56 00:10:54.051 lat (usec): min=1695, max=39316, avg=10278.72, stdev=3584.31 00:10:54.051 clat percentiles (usec): 00:10:54.051 | 1.00th=[ 3687], 5.00th=[ 5145], 10.00th=[ 6325], 20.00th=[ 8356], 00:10:54.051 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:10:54.051 | 70.00th=[11076], 80.00th=[11731], 90.00th=[13173], 95.00th=[15270], 00:10:54.051 | 99.00th=[23987], 99.50th=[32900], 99.90th=[35390], 99.95th=[35390], 00:10:54.051 | 99.99th=[39060] 00:10:54.051 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:10:54.051 slat (usec): min=2, max=9219, avg=65.63, stdev=466.56 00:10:54.051 clat (usec): min=657, max=26844, avg=8934.21, stdev=2735.98 00:10:54.051 lat (usec): min=1073, max=26848, avg=8999.84, stdev=2745.80 00:10:54.051 clat percentiles (usec): 00:10:54.051 | 1.00th=[ 2606], 5.00th=[ 4948], 10.00th=[ 5604], 20.00th=[ 6521], 00:10:54.051 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9634], 00:10:54.051 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12518], 95.00th=[13173], 00:10:54.051 | 99.00th=[15533], 99.50th=[15795], 99.90th=[22414], 99.95th=[26870], 00:10:54.051 | 99.99th=[26870] 00:10:54.051 bw ( KiB/s): min=24576, max=28672, per=35.93%, avg=26624.00, stdev=2896.31, samples=2 00:10:54.051 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:10:54.051 lat (usec) : 750=0.01% 00:10:54.051 lat (msec) : 2=0.29%, 4=1.56%, 10=58.03%, 20=39.25%, 50=0.86% 00:10:54.051 cpu : usr=5.58%, sys=8.28%, ctx=404, majf=0, minf=1 00:10:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.051 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.051 issued rwts: total=6650,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.051 job2: (groupid=0, jobs=1): err= 0: pid=165151: Thu Jul 25 13:38:50 2024 00:10:54.051 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:54.051 slat (usec): min=2, max=12299, avg=132.44, stdev=875.20 00:10:54.051 clat (usec): min=8472, max=37934, avg=18616.66, stdev=5927.96 00:10:54.051 lat (usec): min=8487, max=38560, avg=18749.10, stdev=5995.86 00:10:54.051 clat percentiles (usec): 00:10:54.051 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[13304], 00:10:54.051 | 30.00th=[14222], 40.00th=[15664], 50.00th=[16909], 60.00th=[20317], 00:10:54.051 | 70.00th=[21627], 80.00th=[23200], 90.00th=[26608], 95.00th=[29754], 00:10:54.051 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[38011], 00:10:54.051 | 99.99th=[38011] 00:10:54.051 write: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1004msec); 0 zone resets 00:10:54.051 slat (usec): min=3, max=11524, avg=170.44, stdev=858.10 00:10:54.051 clat (usec): min=722, max=56429, avg=22057.91, stdev=12295.24 00:10:54.051 lat (usec): min=730, max=57967, avg=22228.35, stdev=12372.10 00:10:54.051 clat percentiles (usec): 00:10:54.051 | 1.00th=[ 6063], 5.00th=[ 9765], 10.00th=[11207], 20.00th=[12256], 00:10:54.051 | 30.00th=[13566], 40.00th=[16319], 50.00th=[18744], 60.00th=[20579], 00:10:54.051 | 70.00th=[22152], 80.00th=[29754], 90.00th=[44303], 95.00th=[50070], 00:10:54.051 | 99.00th=[54789], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:10:54.051 | 99.99th=[56361] 00:10:54.051 bw ( KiB/s): min= 9600, max=15032, per=16.62%, avg=12316.00, stdev=3841.00, samples=2 00:10:54.051 iops : min= 2400, max= 3758, avg=3079.00, stdev=960.25, samples=2 00:10:54.051 lat (usec) : 750=0.05% 00:10:54.051 lat (msec) : 2=0.06%, 4=0.29%, 10=2.65%, 20=53.58%, 50=40.90% 00:10:54.051 lat (msec) : 100=2.47% 00:10:54.051 cpu : usr=2.69%, sys=5.88%, ctx=345, majf=0, minf=1 00:10:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.051 issued rwts: total=3072,3158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.051 job3: (groupid=0, jobs=1): err= 0: pid=165152: Thu Jul 25 13:38:50 2024 00:10:54.051 read: IOPS=4291, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1006msec) 00:10:54.051 slat (usec): min=2, max=13730, avg=113.29, stdev=749.81 00:10:54.051 clat (usec): min=1860, max=38702, avg=15434.03, stdev=5952.08 00:10:54.051 lat (usec): min=5044, max=38710, avg=15547.32, stdev=6000.89 00:10:54.051 clat percentiles (usec): 00:10:54.051 | 1.00th=[ 8029], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:10:54.051 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[15008], 00:10:54.051 | 70.00th=[16581], 80.00th=[19792], 90.00th=[24249], 95.00th=[29492], 00:10:54.051 | 99.00th=[32375], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:10:54.051 | 99.99th=[38536] 00:10:54.051 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:10:54.051 slat (usec): min=3, max=20179, avg=102.31, stdev=783.82 00:10:54.051 clat (usec): min=2077, max=33873, avg=13164.73, stdev=4586.36 00:10:54.051 lat (usec): min=2095, max=33886, avg=13267.04, stdev=4641.52 
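(Note on the depth setting: the -d 128 passed to the fio-wrapper becomes iodepth=128 in the generated job file, so each libaio job keeps up to 128 I/Os in flight per namespace — hence the much higher bandwidth of these passes compared with the depth-1 runs. The recurring "Could not set queue depth" warnings are most likely fio failing to adjust the SCSI-style sysfs queue_depth attribute, which NVMe block devices do not expose; they are harmless here. The kernel's own request-queue bound can still be inspected directly, e.g.:

cat /sys/block/nvme0n1/queue/nr_requests

— though that is a block-layer limit, distinct from fio's per-job iodepth.)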
00:10:54.051 clat percentiles (usec): 00:10:54.051 | 1.00th=[ 4359], 5.00th=[ 6652], 10.00th=[ 8586], 20.00th=[10421], 00:10:54.051 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12256], 60.00th=[13435], 00:10:54.051 | 70.00th=[14091], 80.00th=[14746], 90.00th=[20579], 95.00th=[23200], 00:10:54.051 | 99.00th=[25822], 99.50th=[30016], 99.90th=[32113], 99.95th=[32113], 00:10:54.051 | 99.99th=[33817] 00:10:54.051 bw ( KiB/s): min=18400, max=18464, per=24.88%, avg=18432.00, stdev=45.25, samples=2 00:10:54.051 iops : min= 4600, max= 4616, avg=4608.00, stdev=11.31, samples=2 00:10:54.051 lat (msec) : 2=0.01%, 4=0.27%, 10=13.49%, 20=70.23%, 50=16.00% 00:10:54.051 cpu : usr=5.67%, sys=6.97%, ctx=296, majf=0, minf=1 00:10:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.051 issued rwts: total=4317,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.051 00:10:54.051 Run status group 0 (all jobs): 00:10:54.051 READ: bw=70.4MiB/s (73.8MB/s), 12.0MiB/s-25.9MiB/s (12.5MB/s-27.1MB/s), io=70.8MiB (74.3MB), run=1004-1006msec 00:10:54.051 WRITE: bw=72.4MiB/s (75.9MB/s), 12.3MiB/s-25.9MiB/s (12.9MB/s-27.2MB/s), io=72.8MiB (76.3MB), run=1004-1006msec 00:10:54.051 00:10:54.051 Disk stats (read/write): 00:10:54.051 nvme0n1: ios=3344/3584, merge=0/0, ticks=29475/44587, in_queue=74062, util=96.79% 00:10:54.051 nvme0n2: ios=5499/5632, merge=0/0, ticks=50357/41820, in_queue=92177, util=86.28% 00:10:54.051 nvme0n3: ios=2533/2560, merge=0/0, ticks=25566/28121, in_queue=53687, util=96.17% 00:10:54.051 nvme0n4: ios=3572/3584, merge=0/0, ticks=37890/34365, in_queue=72255, util=95.90% 00:10:54.051 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:54.051 [global] 00:10:54.051 thread=1 00:10:54.051 invalidate=1 00:10:54.051 rw=randwrite 00:10:54.051 time_based=1 00:10:54.051 runtime=1 00:10:54.051 ioengine=libaio 00:10:54.051 direct=1 00:10:54.051 bs=4096 00:10:54.051 iodepth=128 00:10:54.051 norandommap=0 00:10:54.051 numjobs=1 00:10:54.051 00:10:54.051 verify_dump=1 00:10:54.051 verify_backlog=512 00:10:54.051 verify_state_save=0 00:10:54.051 do_verify=1 00:10:54.051 verify=crc32c-intel 00:10:54.051 [job0] 00:10:54.051 filename=/dev/nvme0n1 00:10:54.051 [job1] 00:10:54.051 filename=/dev/nvme0n2 00:10:54.051 [job2] 00:10:54.051 filename=/dev/nvme0n3 00:10:54.051 [job3] 00:10:54.051 filename=/dev/nvme0n4 00:10:54.051 Could not set queue depth (nvme0n1) 00:10:54.051 Could not set queue depth (nvme0n2) 00:10:54.051 Could not set queue depth (nvme0n3) 00:10:54.051 Could not set queue depth (nvme0n4) 00:10:54.310 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.310 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.310 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.310 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.310 fio-3.35 00:10:54.310 Starting 4 threads 00:10:55.688 00:10:55.688 job0: (groupid=0, jobs=1): err= 0: pid=165572: 
Thu Jul 25 13:38:52 2024 00:10:55.688 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:10:55.688 slat (nsec): min=1797, max=9690.8k, avg=118557.65, stdev=737744.37 00:10:55.688 clat (usec): min=6276, max=56418, avg=16058.98, stdev=6211.79 00:10:55.688 lat (usec): min=6283, max=56424, avg=16177.54, stdev=6240.41 00:10:55.688 clat percentiles (usec): 00:10:55.688 | 1.00th=[ 8029], 5.00th=[10159], 10.00th=[10683], 20.00th=[11207], 00:10:55.689 | 30.00th=[11731], 40.00th=[11863], 50.00th=[13829], 60.00th=[15664], 00:10:55.689 | 70.00th=[18220], 80.00th=[21890], 90.00th=[25035], 95.00th=[27132], 00:10:55.689 | 99.00th=[30016], 99.50th=[33424], 99.90th=[56361], 99.95th=[56361], 00:10:55.689 | 99.99th=[56361] 00:10:55.689 write: IOPS=4159, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1009msec); 0 zone resets 00:10:55.689 slat (usec): min=2, max=6612, avg=111.41, stdev=575.38 00:10:55.689 clat (usec): min=2207, max=33406, avg=14731.20, stdev=6109.83 00:10:55.689 lat (usec): min=4501, max=33410, avg=14842.62, stdev=6149.15 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[ 4686], 5.00th=[ 7111], 10.00th=[ 8225], 20.00th=[10683], 00:10:55.689 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12911], 00:10:55.689 | 70.00th=[17433], 80.00th=[21365], 90.00th=[24249], 95.00th=[26346], 00:10:55.689 | 99.00th=[30278], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:10:55.689 | 99.99th=[33424] 00:10:55.689 bw ( KiB/s): min=12288, max=20480, per=22.30%, avg=16384.00, stdev=5792.62, samples=2 00:10:55.689 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:10:55.689 lat (msec) : 4=0.01%, 10=10.99%, 20=64.66%, 50=24.12%, 100=0.23% 00:10:55.689 cpu : usr=3.87%, sys=6.45%, ctx=354, majf=0, minf=1 00:10:55.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:55.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.689 issued rwts: total=4096,4197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.689 job1: (groupid=0, jobs=1): err= 0: pid=165573: Thu Jul 25 13:38:52 2024 00:10:55.689 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:55.689 slat (usec): min=2, max=7328, avg=84.41, stdev=453.94 00:10:55.689 clat (usec): min=5649, max=23597, avg=11320.68, stdev=1958.54 00:10:55.689 lat (usec): min=5654, max=23605, avg=11405.10, stdev=1977.21 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10159], 00:10:55.689 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:10:55.689 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13566], 95.00th=[14222], 00:10:55.689 | 99.00th=[20841], 99.50th=[21627], 99.90th=[23462], 99.95th=[23725], 00:10:55.689 | 99.99th=[23725] 00:10:55.689 write: IOPS=5672, BW=22.2MiB/s (23.2MB/s)(22.2MiB/1002msec); 0 zone resets 00:10:55.689 slat (usec): min=2, max=6038, avg=85.43, stdev=457.15 00:10:55.689 clat (usec): min=277, max=21692, avg=11067.04, stdev=2101.02 00:10:55.689 lat (usec): min=3454, max=23114, avg=11152.47, stdev=2123.21 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[ 6980], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[ 9896], 00:10:55.689 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[11076], 00:10:55.689 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13042], 95.00th=[14615], 00:10:55.689 | 99.00th=[19268], 
99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:10:55.689 | 99.99th=[21627] 00:10:55.689 bw ( KiB/s): min=21400, max=23656, per=30.67%, avg=22528.00, stdev=1595.23, samples=2 00:10:55.689 iops : min= 5350, max= 5914, avg=5632.00, stdev=398.81, samples=2 00:10:55.689 lat (usec) : 500=0.01% 00:10:55.689 lat (msec) : 4=0.40%, 10=21.35%, 20=77.47%, 50=0.78% 00:10:55.689 cpu : usr=5.29%, sys=7.29%, ctx=505, majf=0, minf=1 00:10:55.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:55.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.689 issued rwts: total=5632,5684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.689 job2: (groupid=0, jobs=1): err= 0: pid=165574: Thu Jul 25 13:38:52 2024 00:10:55.689 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:55.689 slat (usec): min=2, max=6915, avg=128.18, stdev=675.76 00:10:55.689 clat (usec): min=7231, max=28050, avg=16599.05, stdev=3581.76 00:10:55.689 lat (usec): min=7243, max=28064, avg=16727.23, stdev=3637.61 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[10028], 5.00th=[11469], 10.00th=[11731], 20.00th=[13435], 00:10:55.689 | 30.00th=[14091], 40.00th=[15401], 50.00th=[16909], 60.00th=[17695], 00:10:55.689 | 70.00th=[18482], 80.00th=[20055], 90.00th=[21365], 95.00th=[22938], 00:10:55.689 | 99.00th=[25035], 99.50th=[25035], 99.90th=[26608], 99.95th=[27132], 00:10:55.689 | 99.99th=[28181] 00:10:55.689 write: IOPS=3834, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1006msec); 0 zone resets 00:10:55.689 slat (usec): min=2, max=11992, avg=132.07, stdev=680.60 00:10:55.689 clat (usec): min=4788, max=56993, avg=17536.92, stdev=7888.94 00:10:55.689 lat (usec): min=6223, max=56997, avg=17668.99, stdev=7939.05 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[11207], 20.00th=[11731], 00:10:55.689 | 30.00th=[13042], 40.00th=[13960], 50.00th=[15401], 60.00th=[17171], 00:10:55.689 | 70.00th=[18482], 80.00th=[20579], 90.00th=[27919], 95.00th=[33424], 00:10:55.689 | 99.00th=[51643], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:10:55.689 | 99.99th=[56886] 00:10:55.689 bw ( KiB/s): min=12032, max=17843, per=20.33%, avg=14937.50, stdev=4109.00, samples=2 00:10:55.689 iops : min= 3008, max= 4460, avg=3734.00, stdev=1026.72, samples=2 00:10:55.689 lat (msec) : 10=2.46%, 20=76.08%, 50=20.73%, 100=0.73% 00:10:55.689 cpu : usr=3.48%, sys=6.17%, ctx=421, majf=0, minf=1 00:10:55.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:55.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.689 issued rwts: total=3584,3858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.689 job3: (groupid=0, jobs=1): err= 0: pid=165575: Thu Jul 25 13:38:52 2024 00:10:55.689 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:55.689 slat (usec): min=2, max=5761, avg=87.96, stdev=500.67 00:10:55.689 clat (usec): min=1584, max=51719, avg=11717.67, stdev=3136.22 00:10:55.689 lat (usec): min=1873, max=51736, avg=11805.63, stdev=3153.00 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[ 6259], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[10290], 00:10:55.689 | 30.00th=[10814], 
40.00th=[11207], 50.00th=[11731], 60.00th=[11994], 00:10:55.689 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13960], 95.00th=[14877], 00:10:55.689 | 99.00th=[18482], 99.50th=[27395], 99.90th=[51643], 99.95th=[51643], 00:10:55.689 | 99.99th=[51643] 00:10:55.689 write: IOPS=4771, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1004msec); 0 zone resets 00:10:55.689 slat (usec): min=2, max=43316, avg=116.45, stdev=853.34 00:10:55.689 clat (usec): min=333, max=80794, avg=15258.68, stdev=12817.56 00:10:55.689 lat (usec): min=4129, max=80808, avg=15375.13, stdev=12894.82 00:10:55.689 clat percentiles (usec): 00:10:55.689 | 1.00th=[ 4883], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[ 9896], 00:10:55.689 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[12387], 00:10:55.689 | 70.00th=[12911], 80.00th=[13960], 90.00th=[22938], 95.00th=[49546], 00:10:55.689 | 99.00th=[72877], 99.50th=[73925], 99.90th=[81265], 99.95th=[81265], 00:10:55.689 | 99.99th=[81265] 00:10:55.689 bw ( KiB/s): min=17498, max=19840, per=25.41%, avg=18669.00, stdev=1656.04, samples=2 00:10:55.689 iops : min= 4374, max= 4960, avg=4667.00, stdev=414.36, samples=2 00:10:55.689 lat (usec) : 500=0.01% 00:10:55.689 lat (msec) : 2=0.10%, 4=0.18%, 10=19.60%, 20=74.04%, 50=3.68% 00:10:55.689 lat (msec) : 100=2.39% 00:10:55.689 cpu : usr=5.38%, sys=5.78%, ctx=505, majf=0, minf=1 00:10:55.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:55.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.689 issued rwts: total=4608,4791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.689 00:10:55.689 Run status group 0 (all jobs): 00:10:55.689 READ: bw=69.4MiB/s (72.7MB/s), 13.9MiB/s-22.0MiB/s (14.6MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1002-1009msec 00:10:55.689 WRITE: bw=71.7MiB/s (75.2MB/s), 15.0MiB/s-22.2MiB/s (15.7MB/s-23.2MB/s), io=72.4MiB (75.9MB), run=1002-1009msec 00:10:55.689 00:10:55.689 Disk stats (read/write): 00:10:55.689 nvme0n1: ios=3390/3584, merge=0/0, ticks=27493/21015, in_queue=48508, util=91.48% 00:10:55.689 nvme0n2: ios=4630/4854, merge=0/0, ticks=17856/16311, in_queue=34167, util=97.55% 00:10:55.689 nvme0n3: ios=3094/3489, merge=0/0, ticks=16990/16993, in_queue=33983, util=96.81% 00:10:55.689 nvme0n4: ios=3643/3864, merge=0/0, ticks=22671/26692, in_queue=49363, util=95.69% 00:10:55.689 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:55.689 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=165842 00:10:55.689 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:55.689 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:55.689 [global] 00:10:55.689 thread=1 00:10:55.689 invalidate=1 00:10:55.689 rw=read 00:10:55.689 time_based=1 00:10:55.689 runtime=10 00:10:55.689 ioengine=libaio 00:10:55.689 direct=1 00:10:55.689 bs=4096 00:10:55.689 iodepth=1 00:10:55.689 norandommap=1 00:10:55.689 numjobs=1 00:10:55.689 00:10:55.689 [job0] 00:10:55.689 filename=/dev/nvme0n1 00:10:55.689 [job1] 00:10:55.689 filename=/dev/nvme0n2 00:10:55.689 [job2] 00:10:55.689 filename=/dev/nvme0n3 00:10:55.689 [job3] 00:10:55.689 filename=/dev/nvme0n4 00:10:55.689 Could not set queue depth (nvme0n1) 
00:10:55.689 Could not set queue depth (nvme0n2) 00:10:55.689 Could not set queue depth (nvme0n3) 00:10:55.689 Could not set queue depth (nvme0n4) 00:10:55.949 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.949 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.949 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.949 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.949 fio-3.35 00:10:55.949 Starting 4 threads 00:10:58.484 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:58.743 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:58.743 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:10:58.743 fio: pid=165997, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:59.001 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.001 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:59.001 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=278528, buflen=4096 00:10:59.001 fio: pid=165996, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:59.260 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.260 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:59.260 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=303104, buflen=4096 00:10:59.260 fio: pid=165994, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:59.260 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=315392, buflen=4096 00:10:59.260 fio: pid=165995, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:59.260 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.260 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:59.565 00:10:59.565 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=165994: Thu Jul 25 13:38:56 2024 00:10:59.565 read: IOPS=25, BW=98.7KiB/s (101kB/s)(296KiB/2999msec) 00:10:59.565 slat (usec): min=10, max=6546, avg=144.96, stdev=802.56 00:10:59.565 clat (usec): min=719, max=42767, avg=40092.11, stdev=6621.06 00:10:59.565 lat (usec): min=750, max=48895, avg=40238.68, stdev=6705.82 00:10:59.565 clat percentiles (usec): 00:10:59.565 | 1.00th=[ 717], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:59.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:59.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 
95.00th=[42206], 00:10:59.565 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:59.565 | 99.99th=[42730] 00:10:59.565 bw ( KiB/s): min= 96, max= 104, per=27.54%, avg=99.20, stdev= 4.38, samples=5 00:10:59.565 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:59.565 lat (usec) : 750=2.67% 00:10:59.565 lat (msec) : 50=96.00% 00:10:59.565 cpu : usr=0.10%, sys=0.00%, ctx=79, majf=0, minf=1 00:10:59.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.565 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=165995: Thu Jul 25 13:38:56 2024 00:10:59.565 read: IOPS=24, BW=97.5KiB/s (99.8kB/s)(308KiB/3160msec) 00:10:59.565 slat (usec): min=9, max=5645, avg=156.33, stdev=819.78 00:10:59.565 clat (usec): min=749, max=43196, avg=40604.79, stdev=4621.39 00:10:59.565 lat (usec): min=785, max=47858, avg=40762.81, stdev=4738.52 00:10:59.565 clat percentiles (usec): 00:10:59.565 | 1.00th=[ 750], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:59.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:59.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:59.565 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:59.565 | 99.99th=[43254] 00:10:59.565 bw ( KiB/s): min= 96, max= 104, per=27.26%, avg=98.00, stdev= 3.35, samples=6 00:10:59.565 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:10:59.565 lat (usec) : 750=1.28% 00:10:59.565 lat (msec) : 50=97.44% 00:10:59.565 cpu : usr=0.13%, sys=0.00%, ctx=80, majf=0, minf=1 00:10:59.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.565 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=165996: Thu Jul 25 13:38:56 2024 00:10:59.565 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(272KiB/2786msec) 00:10:59.565 slat (nsec): min=10204, max=33682, avg=24103.12, stdev=3146.15 00:10:59.565 clat (usec): min=791, max=45049, avg=40634.00, stdev=4950.37 00:10:59.565 lat (usec): min=825, max=45059, avg=40658.11, stdev=4949.11 00:10:59.565 clat percentiles (usec): 00:10:59.565 | 1.00th=[ 791], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:59.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:59.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:59.565 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:59.565 | 99.99th=[44827] 00:10:59.565 bw ( KiB/s): min= 96, max= 104, per=26.98%, avg=97.60, stdev= 3.58, samples=5 00:10:59.565 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:59.565 lat (usec) : 1000=1.45% 00:10:59.565 lat (msec) : 50=97.10% 00:10:59.565 cpu : usr=0.14%, sys=0.00%, ctx=69, majf=0, minf=1 00:10:59.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.565 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=165997: Thu Jul 25 13:38:56 2024 00:10:59.565 read: IOPS=25, BW=99.5KiB/s (102kB/s)(260KiB/2614msec) 00:10:59.565 slat (nsec): min=13137, max=36765, avg=24159.50, stdev=2270.47 00:10:59.565 clat (usec): min=686, max=41974, avg=39865.10, stdev=7037.45 00:10:59.565 lat (usec): min=712, max=42000, avg=39889.25, stdev=7036.20 00:10:59.565 clat percentiles (usec): 00:10:59.565 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:59.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:59.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:59.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:59.565 | 99.99th=[42206] 00:10:59.565 bw ( KiB/s): min= 96, max= 104, per=27.54%, avg=99.20, stdev= 4.38, samples=5 00:10:59.565 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:59.565 lat (usec) : 750=3.03% 00:10:59.565 lat (msec) : 50=95.45% 00:10:59.565 cpu : usr=0.00%, sys=0.15%, ctx=66, majf=0, minf=2 00:10:59.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.565 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.565 00:10:59.565 Run status group 0 (all jobs): 00:10:59.565 READ: bw=359KiB/s (368kB/s), 97.5KiB/s-99.5KiB/s (99.8kB/s-102kB/s), io=1136KiB (1163kB), run=2614-3160msec 00:10:59.565 00:10:59.565 Disk stats (read/write): 00:10:59.565 nvme0n1: ios=100/0, merge=0/0, ticks=3594/0, in_queue=3594, util=100.00% 00:10:59.565 nvme0n2: ios=75/0, merge=0/0, ticks=3046/0, in_queue=3046, util=95.11% 00:10:59.565 nvme0n3: ios=63/0, merge=0/0, ticks=2560/0, in_queue=2560, util=95.92% 00:10:59.565 nvme0n4: ios=64/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.39% 00:10:59.565 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.565 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:59.824 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.824 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:59.824 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.824 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:00.083 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.083 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 165842 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:00.341 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:00.342 nvmf hotplug test: fio failed as expected 00:11:00.342 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.600 rmmod nvme_tcp 00:11:00.600 rmmod nvme_fabrics 00:11:00.600 rmmod nvme_keyring 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 162765 ']' 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 162765 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 162765 ']' 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 162765 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.600 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162765 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162765' 00:11:00.859 killing process with pid 162765 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 162765 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 162765 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.859 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.431 00:11:03.431 real 0m28.397s 00:11:03.431 user 2m3.032s 00:11:03.431 sys 0m9.857s 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 ************************************ 00:11:03.431 END TEST nvmf_fio_target 00:11:03.431 ************************************ 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
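Aside: the waitforserial_disconnect trace in the fio teardown above boils down to polling lsblk until no block device reports the subsystem serial any more. A minimal sketch, assuming a one-second poll and a 15-retry bound (the real helper lives in autotest_common.sh and may bound and sleep differently):

waitforserial_disconnect() {
    local serial=$1 i=0
    # keep polling while some block device still carries the serial
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        i=$((i + 1))
        [ "$i" -gt 15 ] && return 1   # assumed ~15s timeout
        sleep 1
    done
    return 0
}
# e.g.:  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
#        waitforserial_disconnect SPDKISFASTANDAWESOME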
00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 ************************************ 00:11:03.431 START TEST nvmf_bdevio 00:11:03.431 ************************************ 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:03.431 * Looking for test storage... 00:11:03.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.431 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:11:03.432 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.432 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:09.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:09.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.998 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:09.999 Found net devices under 0000:af:00.0: cvl_0_0 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:09.999 Found net devices under 0000:af:00.1: cvl_0_1 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.999 13:39:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.999 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:11:10.258 00:11:10.258 --- 10.0.0.2 ping statistics --- 00:11:10.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.258 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:11:10.258 00:11:10.258 --- 10.0.0.1 ping statistics --- 00:11:10.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.258 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.258 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=171035 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 171035 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 171035 ']' 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.258 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.258 [2024-07-25 13:39:07.081426] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
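For reference, the nvmf_tcp_init sequence traced above reduces to the commands below. cvl_0_0/cvl_0_1 are the two E810 (ice) ports found by the discovery step; one is moved into a private namespace as the target side (10.0.0.2) while its sibling stays in the root namespace as the initiator side (10.0.0.1), and 4420 is NVMF_PORT:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator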
00:11:10.258 [2024-07-25 13:39:07.081476] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.258 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.258 [2024-07-25 13:39:07.124597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:10.517 [2024-07-25 13:39:07.160191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.517 [2024-07-25 13:39:07.198998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.517 [2024-07-25 13:39:07.199039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.517 [2024-07-25 13:39:07.199050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.517 [2024-07-25 13:39:07.199059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.517 [2024-07-25 13:39:07.199067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.517 [2024-07-25 13:39:07.199186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.517 [2024-07-25 13:39:07.199295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:10.518 [2024-07-25 13:39:07.199426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.518 [2024-07-25 13:39:07.199427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 [2024-07-25 13:39:07.936182] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 Malloc0 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.086 13:39:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.086 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 [2024-07-25 13:39:07.982360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:11.345 { 00:11:11.345 "params": { 00:11:11.345 "name": "Nvme$subsystem", 00:11:11.345 "trtype": "$TEST_TRANSPORT", 00:11:11.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.345 "adrfam": "ipv4", 00:11:11.345 "trsvcid": "$NVMF_PORT", 00:11:11.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.345 "hdgst": ${hdgst:-false}, 00:11:11.345 "ddgst": ${ddgst:-false} 00:11:11.345 }, 00:11:11.345 "method": "bdev_nvme_attach_controller" 00:11:11.345 } 00:11:11.345 EOF 00:11:11.345 )") 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:11.345 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
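The target bring-up traced above (rpc_cmd in the xtrace is the harness's wrapper around scripts/rpc.py) can be reproduced stand-alone against a running nvmf_tgt; the generated initiator JSON itself is printed just below. A sketch, assuming the default RPC socket:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420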
00:11:11.346 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:11.346 13:39:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:11.346 "params": { 00:11:11.346 "name": "Nvme1", 00:11:11.346 "trtype": "tcp", 00:11:11.346 "traddr": "10.0.0.2", 00:11:11.346 "adrfam": "ipv4", 00:11:11.346 "trsvcid": "4420", 00:11:11.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.346 "hdgst": false, 00:11:11.346 "ddgst": false 00:11:11.346 }, 00:11:11.346 "method": "bdev_nvme_attach_controller" 00:11:11.346 }' 00:11:11.346 [2024-07-25 13:39:08.014923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:11:11.346 [2024-07-25 13:39:08.014976] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171319 ] 00:11:11.346 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.346 [2024-07-25 13:39:08.052686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:11.346 [2024-07-25 13:39:08.087252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.346 [2024-07-25 13:39:08.127851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.346 [2024-07-25 13:39:08.127947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.346 [2024-07-25 13:39:08.127949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.605 I/O targets: 00:11:11.605 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:11.605 00:11:11.605 00:11:11.605 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.605 http://cunit.sourceforge.net/ 00:11:11.605 00:11:11.605 00:11:11.605 Suite: bdevio tests on: Nvme1n1 00:11:11.605 Test: blockdev write read block ...passed 00:11:11.605 Test: blockdev write zeroes read block ...passed 00:11:11.864 Test: blockdev write zeroes read no split ...passed 00:11:11.864 Test: blockdev write zeroes read split ...passed 00:11:11.864 Test: blockdev write zeroes read split partial ...passed 00:11:11.864 Test: blockdev reset ...[2024-07-25 13:39:08.612453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:11.864 [2024-07-25 13:39:08.612513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392ec0 (9): Bad file descriptor 00:11:11.864 [2024-07-25 13:39:08.634599] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:11.864 passed 00:11:11.864 Test: blockdev write read 8 blocks ...passed 00:11:11.864 Test: blockdev write read size > 128k ...passed 00:11:11.864 Test: blockdev write read invalid size ...passed 00:11:11.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.864 Test: blockdev write read max offset ...passed 00:11:12.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.123 Test: blockdev writev readv 8 blocks ...passed 00:11:12.123 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.123 Test: blockdev writev readv block ...passed 00:11:12.123 Test: blockdev writev readv size > 128k ...passed 00:11:12.123 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.123 Test: blockdev comparev and writev ...[2024-07-25 13:39:08.890525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.890557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.890574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.890584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.890927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.890940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.890954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.890965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.891294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.891306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.891320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.891331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.891648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.891662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.891676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.124 [2024-07-25 13:39:08.891686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:12.124 passed 00:11:12.124 Test: blockdev nvme passthru rw ...passed 00:11:12.124 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:39:08.974330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.124 [2024-07-25 13:39:08.974348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.974548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.124 [2024-07-25 13:39:08.974560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.974757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.124 [2024-07-25 13:39:08.974770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:12.124 [2024-07-25 13:39:08.974963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.124 [2024-07-25 13:39:08.974976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:12.124 passed 00:11:12.124 Test: blockdev nvme admin passthru ...passed 00:11:12.383 Test: blockdev copy ...passed 00:11:12.383 00:11:12.383 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.383 suites 1 1 n/a 0 0 00:11:12.383 tests 23 23 23 0 0 00:11:12.383 asserts 152 152 152 0 n/a 00:11:12.383 00:11:12.383 Elapsed time = 1.277 seconds 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.383 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.384 rmmod nvme_tcp 00:11:12.384 rmmod nvme_fabrics 00:11:12.384 rmmod nvme_keyring 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
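For reference, the bdevio run summarized above (23/23 tests passed in 1.277 seconds) can be replayed by hand by writing the attach-controller config to a file instead of feeding it on fd 62. Only the inner method/params entry appears verbatim in the trace; the surrounding subsystems wrapper here is an assumption about gen_nvmf_target_json's full output:

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json "$cfg"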
00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 171035 ']' 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 171035 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 171035 ']' 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 171035 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.384 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171035 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171035' 00:11:12.643 killing process with pid 171035 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 171035 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 171035 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.643 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:15.180 00:11:15.180 real 0m11.707s 00:11:15.180 user 0m13.376s 00:11:15.180 sys 0m6.005s 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.180 ************************************ 00:11:15.180 END TEST nvmf_bdevio 00:11:15.180 ************************************ 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:15.180 00:11:15.180 real 4m51.287s 00:11:15.180 user 10m38.571s 00:11:15.180 sys 1m59.480s 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.180 ************************************ 00:11:15.180 END TEST nvmf_target_core 00:11:15.180 ************************************ 00:11:15.180 13:39:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:15.180 13:39:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.180 13:39:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.180 13:39:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.180 ************************************ 00:11:15.180 START TEST nvmf_target_extra 00:11:15.180 ************************************ 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:15.180 * Looking for test storage... 00:11:15.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.180 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
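Aside: the killprocess teardown traced at the end of nvmf_bdevio above follows a guard-then-kill pattern. A sketch reconstructed from the xtrace, with the sudo branch and signal choice as assumptions (autotest_common.sh is the authority):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # not running: nothing to do
    local name=
    [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then
        kill -9 "$pid"                       # assumed escalation for sudo wrappers
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
}
# here: killprocess 171035   # comm was reactor_3, i.e. the nvmf_tgt reactor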
00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.181 ************************************ 00:11:15.181 START TEST nvmf_example 00:11:15.181 ************************************ 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:15.181 * Looking for test storage... 00:11:15.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.181 13:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.181 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.181 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.753 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.753 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.753 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.753 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.753 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:21.754 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:21.754 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:21.754 Found net devices under 0000:af:00.0: cvl_0_0 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.754 13:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:21.754 Found net devices under 0000:af:00.1: cvl_0_1 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms
00:11:21.754
00:11:21.754 --- 10.0.0.2 ping statistics ---
00:11:21.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.754 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:21.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:21.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms
00:11:21.754
00:11:21.754 --- 10.0.0.1 ping statistics ---
00:11:21.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.754 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.754 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.755 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=175306 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 175306 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 175306 ']' 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.014 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.014 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.951 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:22.952 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:22.952 EAL: No free 2048 kB hugepages reported on node 1
00:11:32.993 Initializing NVMe Controllers
00:11:32.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:32.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:32.993 Initialization complete. Launching workers.
00:11:32.993 ========================================================
00:11:32.993 Latency(us)
00:11:32.993 Device Information : IOPS MiB/s Average min max
00:11:32.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16673.85 65.13 3839.20 685.95 15456.60
00:11:32.993 ========================================================
00:11:32.993 Total : 16673.85 65.13 3839.20 685.95 15456.60
00:11:32.993
00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.993 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:32.993 rmmod nvme_tcp
00:11:32.993 rmmod nvme_fabrics
00:11:32.993 rmmod nvme_keyring
00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 175306 ']' 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 175306 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 175306 ']' 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 175306 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956
-- # ps --no-headers -o comm= 175306 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 175306'
00:11:33.252 killing process with pid 175306
00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 175306 00:11:33.252 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 175306
00:11:33.252 nvmf threads initialize successfully
00:11:33.252 bdev subsystem init successfully
00:11:33.252 created a nvmf target service
00:11:33.252 create target's poll groups done
00:11:33.252 all subsystems of target started
00:11:33.252 nvmf target is running
00:11:33.252 all subsystems of target stopped
00:11:33.252 destroy target's poll groups done
00:11:33.252 destroyed the nvmf target service
00:11:33.252 bdev subsystem finish successfully
00:11:33.252 nvmf threads destroy successfully
00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.252 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:35.787
00:11:35.787 real 0m20.376s
00:11:35.787 user 0m45.432s
00:11:35.787 sys 0m7.135s
00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:35.787 ************************************
00:11:35.787 END TEST nvmf_example
00:11:35.787 ************************************
00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
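Before the filesystem suite's output begins, it is worth condensing what nvmf_example above actually did: it configured the target over its RPC socket (the rpc_cmd calls traced earlier) and then drove I/O with spdk_nvme_perf. Below is a hedged sketch of the same sequence issued manually with scripts/rpc.py (an assumption, since the suite goes through the rpc_cmd wrapper, but the socket path, flags, and names are taken verbatim from the trace):

    # Replay of the target setup against the example app's RPC socket
    # (/var/tmp/spdk.sock, per the waitforlisten frames above).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # flags exactly as issued by the suite
    $rpc bdev_malloc_create 64 512                      # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Load phase: queue depth 64, 4 KiB I/Os, randrw with -M 30 (30% reads), 10 s.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

That run is what produced the roughly 16.7k IOPS at 3.84 ms mean latency reported in the table above.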
00:11:35.787 ************************************ 00:11:35.787 START TEST nvmf_filesystem 00:11:35.787 ************************************ 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:35.787 * Looking for test storage... 00:11:35.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:35.787 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:35.788 13:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:35.788 #define SPDK_CONFIG_H
00:11:35.788 #define SPDK_CONFIG_APPS 1
00:11:35.788 #define SPDK_CONFIG_ARCH native
00:11:35.788 #undef SPDK_CONFIG_ASAN
00:11:35.788 #undef SPDK_CONFIG_AVAHI
00:11:35.788 #undef SPDK_CONFIG_CET
00:11:35.788 #define SPDK_CONFIG_COVERAGE 1
00:11:35.788 #define SPDK_CONFIG_CROSS_PREFIX
00:11:35.788 #undef SPDK_CONFIG_CRYPTO
00:11:35.788 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:35.788 #undef SPDK_CONFIG_CUSTOMOCF
00:11:35.788 #undef SPDK_CONFIG_DAOS
00:11:35.788 #define SPDK_CONFIG_DAOS_DIR
00:11:35.788 #define SPDK_CONFIG_DEBUG 1
00:11:35.788 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:35.788 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:11:35.788 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:11:35.788 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:11:35.788 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:35.788 #undef SPDK_CONFIG_DPDK_UADK
00:11:35.788 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:35.788 #define SPDK_CONFIG_EXAMPLES 1
00:11:35.788 #undef SPDK_CONFIG_FC
00:11:35.788 #define SPDK_CONFIG_FC_PATH
00:11:35.788 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:35.788 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:35.788 #undef SPDK_CONFIG_FUSE
00:11:35.788 #undef SPDK_CONFIG_FUZZER
00:11:35.788 #define SPDK_CONFIG_FUZZER_LIB
00:11:35.788 #undef SPDK_CONFIG_GOLANG
00:11:35.788 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:35.788 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:35.788 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:35.788 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:35.788 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:35.788 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:35.788 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:35.788 #define SPDK_CONFIG_IDXD 1
00:11:35.788 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:35.788 #undef SPDK_CONFIG_IPSEC_MB
00:11:35.788 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:35.788 #define SPDK_CONFIG_ISAL 1
00:11:35.788 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:35.788 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:35.788 #define SPDK_CONFIG_LIBDIR
00:11:35.788 #undef SPDK_CONFIG_LTO
00:11:35.788 #define SPDK_CONFIG_MAX_LCORES 128
00:11:35.788 #define SPDK_CONFIG_NVME_CUSE 1
00:11:35.788 #undef SPDK_CONFIG_OCF
00:11:35.788 #define SPDK_CONFIG_OCF_PATH
00:11:35.788 #define SPDK_CONFIG_OPENSSL_PATH
00:11:35.788 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:35.788 #define SPDK_CONFIG_PGO_DIR
00:11:35.788 #undef SPDK_CONFIG_PGO_USE
00:11:35.788 #define SPDK_CONFIG_PREFIX /usr/local
00:11:35.788 #undef SPDK_CONFIG_RAID5F
00:11:35.788 #undef SPDK_CONFIG_RBD
00:11:35.788 #define SPDK_CONFIG_RDMA 1
00:11:35.788 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:35.788 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:35.788 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:35.788 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:35.788 #define SPDK_CONFIG_SHARED 1
00:11:35.788 #undef SPDK_CONFIG_SMA
00:11:35.788 #define SPDK_CONFIG_TESTS 1
00:11:35.788 #undef SPDK_CONFIG_TSAN
00:11:35.788 #define SPDK_CONFIG_UBLK 1
00:11:35.788 #define SPDK_CONFIG_UBSAN 1
00:11:35.788 #undef SPDK_CONFIG_UNIT_TESTS
00:11:35.788 #undef SPDK_CONFIG_URING
00:11:35.788 #define SPDK_CONFIG_URING_PATH
00:11:35.788 #undef
SPDK_CONFIG_URING_ZNS 00:11:35.788 #undef SPDK_CONFIG_USDT 00:11:35.788 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:35.788 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:35.788 #define SPDK_CONFIG_VFIO_USER 1 00:11:35.788 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:35.788 #define SPDK_CONFIG_VHOST 1 00:11:35.788 #define SPDK_CONFIG_VIRTIO 1 00:11:35.788 #undef SPDK_CONFIG_VTUNE 00:11:35.788 #define SPDK_CONFIG_VTUNE_DIR 00:11:35.788 #define SPDK_CONFIG_WERROR 1 00:11:35.788 #define SPDK_CONFIG_WPDK_DIR 00:11:35.788 #undef SPDK_CONFIG_XNVME 00:11:35.788 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:35.788 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 
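The pm/common frames here assemble the power/performance monitors for the run: collect-cpu-load and collect-vmstat are always enabled, while the checks just below (a DMI-derived string compared against QEMU, and the /.dockerenv test) gate the physical-host-only monitors. A condensed sketch of that selection, in which the is_qemu_guest helper is hypothetical and stands in for the inline comparison visible in the trace:

    # Monitor selection as seen in scripts/perf/pm/common (condensed sketch).
    is_qemu_guest() {
        # Hypothetical stand-in: the real script compares a DMI-derived string
        # against "QEMU", as the masked [[ ... != QEMU ]] test below shows.
        [[ $(cat /sys/class/dmi/id/sys_vendor 2>/dev/null) == QEMU ]]
    }
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)   # always on
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]] && ! is_qemu_guest; then
        MONITOR_RESOURCES+=(collect-cpu-temp)             # physical hosts only
        MONITOR_RESOURCES+=(collect-bmc-pm)               # marked sudo-only in MONITOR_RESOURCES_SUDO
    fi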
00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@84 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- 
# : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@144 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.789 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:35.790 13:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 177614 ]] 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 177614 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.cnLHJ0 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cnLHJ0/tests/target /tmp/spdk.cnLHJ0 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=955215872 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4329213952 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@364 -- # avails["$mount"]=53758582784 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=7983693824 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30861217792 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:35.790 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12325425152 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23031808 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30870192128 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=946176 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read 
-r source fs size use avail _ mount 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:35.791 * Looking for test storage... 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=53758582784 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=10198286336 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@1689 -- # xtrace_fd 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.791 13:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.791 13:39:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:43.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:43.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:43.916 Found net devices under 0000:af:00.0: cvl_0_0 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:43.916 Found net devices under 0000:af:00.1: cvl_0_1 
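The nvmf/common.sh trace above performs NIC discovery for the test run: known Intel (e810, x722) and Mellanox device IDs are collected into arrays keyed by "vendor:device", and each matching PCI function is mapped to its kernel net devices by globbing sysfs, which is what produces the "Found net devices under 0000:af:00.0: cvl_0_0" lines. A condensed sketch of that sysfs lookup, with the two PCI addresses taken from this log:

#!/usr/bin/env bash
# Sketch of the sysfs lookup behind the "Found net devices" lines above.
# PCI addresses are the two E810 functions from this log; adjust per host.
shopt -s nullglob
pci_devs=(0000:af:00.0 0000:af:00.1)
net_devs=()

for pci in "${pci_devs[@]}"; do
  # Every entry under .../net/ is a netdev bound to this PCI function.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  (( ${#pci_net_devs[@]} )) || continue
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done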
00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.916 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:11:43.916 00:11:43.916 --- 10.0.0.2 ping statistics --- 00:11:43.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.917 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:11:43.917 00:11:43.917 --- 10.0.0.1 ping statistics --- 00:11:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.917 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.917 ************************************ 00:11:43.917 START TEST nvmf_filesystem_no_in_capsule 00:11:43.917 ************************************ 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=180908 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@482 -- # waitforlisten 180908 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 180908 ']' 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.917 13:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.917 [2024-07-25 13:39:39.805085] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:11:43.917 [2024-07-25 13:39:39.805130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.917 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.917 [2024-07-25 13:39:39.844384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:43.917 [2024-07-25 13:39:39.877965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.917 [2024-07-25 13:39:39.918175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.917 [2024-07-25 13:39:39.918218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.917 [2024-07-25 13:39:39.918228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.917 [2024-07-25 13:39:39.918237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.917 [2024-07-25 13:39:39.918244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
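The nvmfappstart/waitforlisten pair traced above amounts to launching nvmf_tgt inside the test namespace and polling its RPC socket until it answers. A minimal standalone sketch, assuming SPDK's scripts/rpc.py is on hand; the polling command is an assumption, the framework's waitforlisten does the equivalent:

# start the target in the namespace with the flags from this run
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the UNIX-domain RPC socket accepts requests
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done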
00:11:43.917 [2024-07-25 13:39:39.918291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.917 [2024-07-25 13:39:39.918390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.917 [2024-07-25 13:39:39.918473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.917 [2024-07-25 13:39:39.918474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.917 [2024-07-25 13:39:40.671238] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.917 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.177 Malloc1 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.177 13:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.177 [2024-07-25 13:39:40.826059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:44.177 { 00:11:44.177 "name": "Malloc1", 00:11:44.177 "aliases": [ 00:11:44.177 "8d6c34f1-dbff-41a1-8ebd-55bb885f35ec" 00:11:44.177 ], 00:11:44.177 "product_name": "Malloc disk", 00:11:44.177 "block_size": 512, 00:11:44.177 "num_blocks": 1048576, 00:11:44.177 "uuid": "8d6c34f1-dbff-41a1-8ebd-55bb885f35ec", 00:11:44.177 "assigned_rate_limits": { 00:11:44.177 "rw_ios_per_sec": 0, 00:11:44.177 "rw_mbytes_per_sec": 0, 00:11:44.177 "r_mbytes_per_sec": 0, 00:11:44.177 "w_mbytes_per_sec": 0 00:11:44.177 }, 00:11:44.177 "claimed": true, 00:11:44.177 "claim_type": "exclusive_write", 00:11:44.177 "zoned": false, 00:11:44.177 "supported_io_types": { 00:11:44.177 "read": 
true, 00:11:44.177 "write": true, 00:11:44.177 "unmap": true, 00:11:44.177 "flush": true, 00:11:44.177 "reset": true, 00:11:44.177 "nvme_admin": false, 00:11:44.177 "nvme_io": false, 00:11:44.177 "nvme_io_md": false, 00:11:44.177 "write_zeroes": true, 00:11:44.177 "zcopy": true, 00:11:44.177 "get_zone_info": false, 00:11:44.177 "zone_management": false, 00:11:44.177 "zone_append": false, 00:11:44.177 "compare": false, 00:11:44.177 "compare_and_write": false, 00:11:44.177 "abort": true, 00:11:44.177 "seek_hole": false, 00:11:44.177 "seek_data": false, 00:11:44.177 "copy": true, 00:11:44.177 "nvme_iov_md": false 00:11:44.177 }, 00:11:44.177 "memory_domains": [ 00:11:44.177 { 00:11:44.177 "dma_device_id": "system", 00:11:44.177 "dma_device_type": 1 00:11:44.177 }, 00:11:44.177 { 00:11:44.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.177 "dma_device_type": 2 00:11:44.177 } 00:11:44.177 ], 00:11:44.177 "driver_specific": {} 00:11:44.177 } 00:11:44.177 ]' 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:44.177 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.556 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.556 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.556 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.556 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.556 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:47.463 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:48.030 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:48.030 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:48.967 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:48.967 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:48.967 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.967 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.967 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.225 ************************************ 00:11:49.225 START TEST filesystem_ext4 00:11:49.225 ************************************ 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
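The host-side bring-up just traced (connect by NQN, resolve the block device by serial, carve one GPT partition) condenses to the sketch below; every command is taken from the trace, only the variable plumbing and the sysfs size check (an approximation of sec_size_to_bytes) are added:

# attach the remote namespace over NVMe/TCP
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
  --hostid=006f0d1b-21c0-e711-906e-00163566263e \
  -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# find the block device carrying the subsystem serial
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# device size must equal the malloc bdev: 512 B blocks * 1048576 blocks = 536870912 bytes
[ "$(cat /sys/block/${nvme_name}/size)" -eq $((536870912 / 512)) ]
# one GPT partition spanning the namespace, then a mount point for the fs tests
parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkdir -p /mnt/device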
00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:49.225 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:49.225 mke2fs 1.46.5 (30-Dec-2021) 00:11:49.225 Discarding device blocks: 0/522240 done 00:11:49.225 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:49.225 Filesystem UUID: 6f08987f-3b36-44c0-bb1b-9689d246fe1c 00:11:49.225 Superblock backups stored on blocks: 00:11:49.225 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:49.225 00:11:49.225 Allocating group tables: 0/64 done 00:11:49.225 Writing inode tables: 0/64 done 00:11:49.225 Creating journal (8192 blocks): done 00:11:49.225 Writing superblocks and filesystem accounting information: 0/64 done 00:11:49.225 00:11:49.225 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:49.225 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.485 
13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 180908 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.485 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.744 00:11:49.744 real 0m0.487s 00:11:49.744 user 0m0.032s 00:11:49.744 sys 0m0.072s 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.744 ************************************ 00:11:49.744 END TEST filesystem_ext4 00:11:49.744 ************************************ 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.744 ************************************ 00:11:49.744 START TEST filesystem_btrfs 00:11:49.744 ************************************ 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:49.744 13:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.744 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:50.003 btrfs-progs v6.6.2 00:11:50.003 See https://btrfs.readthedocs.io for more information. 00:11:50.003 00:11:50.003 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:50.003 NOTE: several default settings have changed in version 5.15, please make sure 00:11:50.003 this does not affect your deployments: 00:11:50.003 - DUP for metadata (-m dup) 00:11:50.003 - enabled no-holes (-O no-holes) 00:11:50.003 - enabled free-space-tree (-R free-space-tree) 00:11:50.003 00:11:50.003 Label: (null) 00:11:50.003 UUID: 89974a04-134e-4f5f-bada-2c89033c223b 00:11:50.003 Node size: 16384 00:11:50.003 Sector size: 4096 00:11:50.003 Filesystem size: 510.00MiB 00:11:50.003 Block group profiles: 00:11:50.003 Data: single 8.00MiB 00:11:50.003 Metadata: DUP 32.00MiB 00:11:50.003 System: DUP 8.00MiB 00:11:50.003 SSD detected: yes 00:11:50.003 Zoned device: no 00:11:50.003 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:50.003 Runtime features: free-space-tree 00:11:50.003 Checksum: crc32c 00:11:50.003 Number of devices: 1 00:11:50.003 Devices: 00:11:50.003 ID SIZE PATH 00:11:50.003 1 510.00MiB /dev/nvme0n1p1 00:11:50.003 00:11:50.003 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:50.003 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.940 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.940 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:50.940 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.940 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 180908 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
grep -q -w nvme0n1p1 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.199 00:11:51.199 real 0m1.426s 00:11:51.199 user 0m0.039s 00:11:51.199 sys 0m0.135s 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:51.199 ************************************ 00:11:51.199 END TEST filesystem_btrfs 00:11:51.199 ************************************ 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.199 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.200 ************************************ 00:11:51.200 START TEST filesystem_xfs 00:11:51.200 ************************************ 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:51.200 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:51.200 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:51.200 = sectsz=512 attr=2, projid32bit=1 00:11:51.200 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:51.200 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:51.200 data = bsize=4096 blocks=130560, imaxpct=25 00:11:51.200 = sunit=0 swidth=0 blks 00:11:51.200 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:51.200 log =internal log bsize=4096 blocks=16384, version=2 00:11:51.200 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:51.200 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:52.209 Discarding blocks...Done. 00:11:52.209 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:52.209 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 180908 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.116 00:11:54.116 real 0m2.682s 00:11:54.116 user 0m0.030s 00:11:54.116 sys 0m0.076s 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.116 ************************************ 00:11:54.116 END TEST filesystem_xfs 00:11:54.116 ************************************ 00:11:54.116 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
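Each filesystem_* subtest above (ext4, btrfs, xfs) drives the same smoke test against the partition. Reconstructed from the target/filesystem.sh line numbers in the trace, the shared body is roughly:

fstype=$1                       # ext4, btrfs or xfs
force=-f                        # mkfs.ext4 takes -F, the others -f
[ "$fstype" = ext4 ] && force=-F
mkfs."$fstype" "$force" /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync   # write something and flush it
rm /mnt/device/aaa && sync      # delete it and flush again
umount /mnt/device
kill -0 "$nvmfpid"              # the target must have survived the I/O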
00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 180908 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 180908 ']' 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 180908 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.117 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 180908 00:11:54.376 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.376 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.376 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 180908' 00:11:54.376 killing process with pid 180908 00:11:54.376 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 180908 00:11:54.376 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 180908 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:54.636 00:11:54.636 real 0m11.627s 00:11:54.636 user 0m45.375s 00:11:54.636 sys 0m1.771s 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.636 ************************************ 00:11:54.636 END TEST nvmf_filesystem_no_in_capsule 00:11:54.636 ************************************ 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.636 ************************************ 00:11:54.636 START TEST nvmf_filesystem_in_capsule 00:11:54.636 ************************************ 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=183144 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 183144 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 183144 ']' 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
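This second pass (nvmf_filesystem_in_capsule) repeats the whole flow with one configuration change: the TCP transport is created with a 4096-byte in-capsule data size instead of 0, so small host writes travel inside the command capsule rather than being fetched in a separate transfer. The RPC sequence the following trace performs, written out with rpc_cmd expanded to a plain rpc.py call (an assumption about the wrapper):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # first pass used -c 0
rpc.py bdev_malloc_create 512 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420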
00:11:54.636 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.637 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.637 [2024-07-25 13:39:51.515633] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:11:54.637 [2024-07-25 13:39:51.515681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.896 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.896 [2024-07-25 13:39:51.555417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:54.896 [2024-07-25 13:39:51.589564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.896 [2024-07-25 13:39:51.629819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.896 [2024-07-25 13:39:51.629858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.896 [2024-07-25 13:39:51.629868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.896 [2024-07-25 13:39:51.629876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.896 [2024-07-25 13:39:51.629900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.896 [2024-07-25 13:39:51.629940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.896 [2024-07-25 13:39:51.630035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.896 [2024-07-25 13:39:51.630118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.896 [2024-07-25 13:39:51.630119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.464 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.464 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:55.464 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.464 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.464 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 13:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 [2024-07-25 13:39:52.377206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 Malloc1 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 [2024-07-25 13:39:52.527649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1380 -- # local bs 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:55.724 { 00:11:55.724 "name": "Malloc1", 00:11:55.724 "aliases": [ 00:11:55.724 "6ae465a7-c808-444a-a409-f86d87d72afc" 00:11:55.724 ], 00:11:55.724 "product_name": "Malloc disk", 00:11:55.724 "block_size": 512, 00:11:55.724 "num_blocks": 1048576, 00:11:55.724 "uuid": "6ae465a7-c808-444a-a409-f86d87d72afc", 00:11:55.724 "assigned_rate_limits": { 00:11:55.724 "rw_ios_per_sec": 0, 00:11:55.724 "rw_mbytes_per_sec": 0, 00:11:55.724 "r_mbytes_per_sec": 0, 00:11:55.724 "w_mbytes_per_sec": 0 00:11:55.724 }, 00:11:55.724 "claimed": true, 00:11:55.724 "claim_type": "exclusive_write", 00:11:55.724 "zoned": false, 00:11:55.724 "supported_io_types": { 00:11:55.724 "read": true, 00:11:55.724 "write": true, 00:11:55.724 "unmap": true, 00:11:55.724 "flush": true, 00:11:55.724 "reset": true, 00:11:55.724 "nvme_admin": false, 00:11:55.724 "nvme_io": false, 00:11:55.724 "nvme_io_md": false, 00:11:55.724 "write_zeroes": true, 00:11:55.724 "zcopy": true, 00:11:55.724 "get_zone_info": false, 00:11:55.724 "zone_management": false, 00:11:55.724 "zone_append": false, 00:11:55.724 "compare": false, 00:11:55.724 "compare_and_write": false, 00:11:55.724 "abort": true, 00:11:55.724 "seek_hole": false, 00:11:55.724 "seek_data": false, 00:11:55.724 "copy": true, 00:11:55.724 "nvme_iov_md": false 00:11:55.724 }, 00:11:55.724 "memory_domains": [ 00:11:55.724 { 00:11:55.724 "dma_device_id": "system", 00:11:55.724 "dma_device_type": 1 00:11:55.724 }, 00:11:55.724 { 00:11:55.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.724 "dma_device_type": 2 00:11:55.724 } 00:11:55.724 ], 00:11:55.724 "driver_specific": {} 00:11:55.724 } 00:11:55.724 ]' 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:55.724 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:55.983 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:55.983 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:55.983 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:55.983 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 
-- # malloc_size=536870912 00:11:55.983 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.362 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.362 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.362 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.362 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.362 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.264 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:59.264 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:59.264 13:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:59.522 13:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:00.459 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.397 ************************************ 00:12:01.397 START TEST filesystem_in_capsule_ext4 00:12:01.397 ************************************ 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:01.397 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:01.397 mke2fs 1.46.5 (30-Dec-2021) 00:12:01.397 Discarding device blocks: 0/522240 done 00:12:01.397 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:01.397 Filesystem UUID: a72b88d8-1e16-4d99-98bf-274a565485ed 00:12:01.397 Superblock backups 
stored on blocks: 00:12:01.397 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:01.397 00:12:01.397 Allocating group tables: 0/64 done 00:12:01.397 Writing inode tables: 0/64 done 00:12:01.656 Creating journal (8192 blocks): done 00:12:02.481 Writing superblocks and filesystem accounting information: 0/64 done 00:12:02.481 00:12:02.481 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:02.481 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.481 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 183144 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.740 00:12:02.740 real 0m1.427s 00:12:02.740 user 0m0.033s 00:12:02.740 sys 0m0.077s 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:02.740 ************************************ 00:12:02.740 END TEST filesystem_in_capsule_ext4 00:12:02.740 ************************************ 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule --
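Each filesystem pass above follows the same pattern: label the namespace with one full-size GPT partition, build the filesystem, then smoke-test it with a create/sync/delete round trip before checking the target survived the I/O. A condensed sketch of that sequence, assuming the device resolved earlier and $nvmfpid holding the target pid (183144 in this run):

    # One GPT partition spanning the whole namespace.
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1

    # Mount, write a file, flush, delete it, flush again, unmount.
    mkdir -p /mnt/device
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    # kill -0 only probes: the target process must still be alive afterwards.
    kill -0 "$nvmfpid"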
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.740 ************************************ 00:12:02.740 START TEST filesystem_in_capsule_btrfs 00:12:02.740 ************************************ 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:02.740 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:02.999 btrfs-progs v6.6.2 00:12:02.999 See https://btrfs.readthedocs.io for more information. 00:12:02.999 00:12:02.999 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:02.999 NOTE: several default settings have changed in version 5.15, please make sure 00:12:02.999 this does not affect your deployments: 00:12:02.999 - DUP for metadata (-m dup) 00:12:02.999 - enabled no-holes (-O no-holes) 00:12:02.999 - enabled free-space-tree (-R free-space-tree) 00:12:02.999 00:12:02.999 Label: (null) 00:12:02.999 UUID: 614c396e-2364-42ae-b330-3b09c4a4bd80 00:12:02.999 Node size: 16384 00:12:02.999 Sector size: 4096 00:12:02.999 Filesystem size: 510.00MiB 00:12:02.999 Block group profiles: 00:12:02.999 Data: single 8.00MiB 00:12:02.999 Metadata: DUP 32.00MiB 00:12:02.999 System: DUP 8.00MiB 00:12:02.999 SSD detected: yes 00:12:02.999 Zoned device: no 00:12:02.999 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:02.999 Runtime features: free-space-tree 00:12:02.999 Checksum: crc32c 00:12:02.999 Number of devices: 1 00:12:02.999 Devices: 00:12:02.999 ID SIZE PATH 00:12:02.999 1 510.00MiB /dev/nvme0n1p1 00:12:02.999 00:12:02.999 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:02.999 13:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 183144 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.934 00:12:03.934 real 0m1.173s 00:12:03.934 user 0m0.026s 00:12:03.934 sys 0m0.144s 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:12:03.934 ************************************ 00:12:03.934 END TEST filesystem_in_capsule_btrfs 00:12:03.934 ************************************ 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.934 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.193 ************************************ 00:12:04.193 START TEST filesystem_in_capsule_xfs 00:12:04.193 ************************************ 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:04.193 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:04.193 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:04.193 = sectsz=512 attr=2, projid32bit=1 00:12:04.193 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:04.193 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:04.193 data = bsize=4096 blocks=130560, imaxpct=25 00:12:04.193 = sunit=0 swidth=0 blks 00:12:04.193 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:04.193 log =internal log bsize=4096 blocks=16384, version=2 00:12:04.193 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:04.193 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:05.129 Discarding blocks...Done. 
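The make_filesystem helper driving the ext4, btrfs, and xfs passes differs only in how it spells the force flag (mkfs.ext4 wants -F, the others -f), which is exactly what the force=-F / force=-f branches in the traces show. A reduced sketch of that dispatch, leaving out the retry bookkeeping the full helper carries:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4's mkfs takes -F to force creation; btrfs and xfs take -f.
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1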
00:12:05.129 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:05.129 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.665 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.665 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:07.665 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 183144 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.666 00:12:07.666 real 0m3.503s 00:12:07.666 user 0m0.033s 00:12:07.666 sys 0m0.081s 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.666 ************************************ 00:12:07.666 END TEST filesystem_in_capsule_xfs 00:12:07.666 ************************************ 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:07.666 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 183144 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 183144 ']' 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 183144 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 183144 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 183144' 00:12:07.927 killing process with pid 183144 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 183144 00:12:07.927 13:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 183144 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:08.230 00:12:08.230 real 0m13.577s 00:12:08.230 user 0m53.067s 00:12:08.230 sys 0m1.875s 00:12:08.230 13:40:05 
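Teardown reverses setup: drop the partition under flock so nothing races the device, flush, detach the initiator, wait for the serial to disappear, delete the subsystem over RPC, and stop the target. A condensed sketch, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock in place of the harness's rpc_cmd wrapper:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # Wait until no block device advertises the test serial any more.
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"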
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.230 ************************************ 00:12:08.230 END TEST nvmf_filesystem_in_capsule 00:12:08.230 ************************************ 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.230 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:08.230 rmmod nvme_tcp 00:12:08.230 rmmod nvme_fabrics 00:12:08.230 rmmod nvme_keyring 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.490 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.397 00:12:10.397 real 0m34.859s 00:12:10.397 user 1m40.481s 00:12:10.397 sys 0m9.312s 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:10.397 ************************************ 00:12:10.397 END TEST nvmf_filesystem 00:12:10.397 ************************************ 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.397 13:40:07 
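The bare rmmod lines above come from nvmftestfini unloading the kernel initiator stack with errexit suspended, so a transient "module in use" failure retries instead of aborting the run. The pattern reduces to something like:

    set +e
    for i in {1..20}; do
        # -v echoes the underlying rmmod calls (nvme_tcp, nvme_fabrics, nvme_keyring).
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e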
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.397 ************************************ 00:12:10.397 START TEST nvmf_target_discovery 00:12:10.397 ************************************ 00:12:10.397 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:10.657 * Looking for test storage... 00:12:10.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.657 13:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.657 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.658 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:17.229 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.229 13:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:17.229 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.229 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:17.230 Found net devices under 0000:af:00.0: cvl_0_0 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
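Interface discovery above is pure sysfs walking: for each supported PCI function, the net/ subdirectory names the kernel interface bound to it, which is what the pci_net_devs glob expands. The equivalent one-liner for the two E810 ports found on this machine:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # Prints the bound interface name: cvl_0_0 and cvl_0_1 here.
        ls /sys/bus/pci/devices/$pci/net/
    done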
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:17.230 Found net devices under 0000:af:00.1: cvl_0_1 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.230 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
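nvmf_tcp_init above pins the target-side port into its own network namespace so the initiator (10.0.0.1 on cvl_0_1) reaches the target (10.0.0.2 on cvl_0_0) over the physical link rather than loopback. The wiring, plus the firewall rule and reachability checks that follow, reduce to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP on the initiator side, then prove both directions work.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1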
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:12:17.230 00:12:17.230 --- 10.0.0.2 ping statistics --- 00:12:17.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.230 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:12:17.230 00:12:17.230 --- 10.0.0.1 ping statistics --- 00:12:17.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.230 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=189239 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 189239 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 189239 ']' 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.230 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.230 [2024-07-25 13:40:14.105773] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:12:17.230 [2024-07-25 13:40:14.105826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.489 [2024-07-25 13:40:14.145866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:17.489 [2024-07-25 13:40:14.176727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.489 [2024-07-25 13:40:14.217051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.489 [2024-07-25 13:40:14.217092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.489 [2024-07-25 13:40:14.217102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.489 [2024-07-25 13:40:14.217111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.489 [2024-07-25 13:40:14.217135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
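nvmfappstart launches nvmf_tgt inside that namespace and waitforlisten blocks until the RPC socket answers, which is what the EAL and app startup notices above are reporting. A minimal equivalent, assuming this workspace's build tree and the stock rpc.py as the readiness probe:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten, in spirit: poll the RPC socket until the app responds.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done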
00:12:17.489 [2024-07-25 13:40:14.217179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.489 [2024-07-25 13:40:14.217272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.489 [2024-07-25 13:40:14.217357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.489 [2024-07-25 13:40:14.217358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.489 [2024-07-25 13:40:14.364022] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.489 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 Null1 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 13:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 [2024-07-25 13:40:14.416338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 Null2 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.748 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:17.749 Null3 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 Null4 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:12:17.749 00:12:17.749 Discovery Log Number of Records 6, Generation counter 6 00:12:17.749 =====Discovery Log Entry 0====== 00:12:17.749 trtype: tcp 00:12:17.749 adrfam: ipv4 00:12:17.749 subtype: current discovery subsystem 00:12:17.749 treq: not required 00:12:17.749 portid: 0 00:12:17.749 trsvcid: 4420 00:12:17.749 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:17.749 traddr: 10.0.0.2 00:12:17.749 eflags: explicit discovery connections, duplicate discovery information 00:12:17.749 sectype: none 00:12:17.749 =====Discovery Log Entry 1====== 00:12:17.749 trtype: tcp 00:12:17.749 adrfam: ipv4 00:12:17.749 subtype: nvme subsystem 00:12:17.749 treq: not required 00:12:17.749 portid: 0 00:12:17.749 trsvcid: 4420 00:12:17.749 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:17.749 traddr: 10.0.0.2 00:12:17.749 eflags: none 00:12:17.749 sectype: none 00:12:17.749 =====Discovery Log Entry 2====== 00:12:17.749 trtype: tcp 00:12:17.749 adrfam: ipv4 00:12:17.749 subtype: nvme subsystem 00:12:17.749 treq: not required 00:12:17.749 portid: 0 00:12:17.749 trsvcid: 4420 00:12:17.749 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:17.749 traddr: 10.0.0.2 00:12:17.749 eflags: none 00:12:17.749 sectype: none 00:12:17.749 =====Discovery Log Entry 3====== 00:12:17.749 trtype: tcp 00:12:17.749 adrfam: ipv4 00:12:17.749 subtype: nvme subsystem 00:12:17.749 treq: not required 00:12:17.749 portid: 0 00:12:17.749 trsvcid: 4420 00:12:17.749 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:17.749 traddr: 10.0.0.2 00:12:17.749 eflags: none 00:12:17.749 sectype: none 00:12:17.749 =====Discovery Log Entry 4====== 00:12:17.749 trtype: tcp 00:12:17.749 adrfam: ipv4 00:12:17.749 subtype: nvme subsystem 
00:12:17.749 treq: not required 00:12:17.749 portid: 0 00:12:17.749 trsvcid: 4420 00:12:17.749 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:17.749 traddr: 10.0.0.2 00:12:17.749 eflags: none 00:12:17.749 sectype: none 00:12:17.749 =====Discovery Log Entry 5====== 00:12:17.749 trtype: tcp 00:12:17.749 adrfam: ipv4 00:12:17.749 subtype: discovery subsystem referral 00:12:17.749 treq: not required 00:12:17.749 portid: 0 00:12:17.749 trsvcid: 4430 00:12:17.749 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:17.749 traddr: 10.0.0.2 00:12:17.749 eflags: none 00:12:17.749 sectype: none 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:17.749 Perform nvmf subsystem discovery via RPC 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.749 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.749 [ 00:12:17.749 { 00:12:17.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:17.749 "subtype": "Discovery", 00:12:17.749 "listen_addresses": [ 00:12:17.749 { 00:12:17.749 "trtype": "TCP", 00:12:17.749 "adrfam": "IPv4", 00:12:17.749 "traddr": "10.0.0.2", 00:12:17.749 "trsvcid": "4420" 00:12:17.749 } 00:12:17.749 ], 00:12:17.749 "allow_any_host": true, 00:12:17.749 "hosts": [] 00:12:17.749 }, 00:12:17.749 { 00:12:17.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.749 "subtype": "NVMe", 00:12:17.749 "listen_addresses": [ 00:12:17.749 { 00:12:17.749 "trtype": "TCP", 00:12:17.749 "adrfam": "IPv4", 00:12:17.749 "traddr": "10.0.0.2", 00:12:17.749 "trsvcid": "4420" 00:12:17.749 } 00:12:17.749 ], 00:12:17.749 "allow_any_host": true, 00:12:17.749 "hosts": [], 00:12:17.749 "serial_number": "SPDK00000000000001", 00:12:17.749 "model_number": "SPDK bdev Controller", 00:12:17.749 "max_namespaces": 32, 00:12:17.749 "min_cntlid": 1, 00:12:17.749 "max_cntlid": 65519, 00:12:17.749 "namespaces": [ 00:12:17.749 { 00:12:17.749 "nsid": 1, 00:12:17.749 "bdev_name": "Null1", 00:12:17.749 "name": "Null1", 00:12:18.009 "nguid": "3DA66166EE394A5483F91ADC07AA8138", 00:12:18.009 "uuid": "3da66166-ee39-4a54-83f9-1adc07aa8138" 00:12:18.009 } 00:12:18.009 ] 00:12:18.009 }, 00:12:18.009 { 00:12:18.009 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:18.009 "subtype": "NVMe", 00:12:18.009 "listen_addresses": [ 00:12:18.009 { 00:12:18.009 "trtype": "TCP", 00:12:18.009 "adrfam": "IPv4", 00:12:18.009 "traddr": "10.0.0.2", 00:12:18.009 "trsvcid": "4420" 00:12:18.009 } 00:12:18.009 ], 00:12:18.009 "allow_any_host": true, 00:12:18.009 "hosts": [], 00:12:18.009 "serial_number": "SPDK00000000000002", 00:12:18.009 "model_number": "SPDK bdev Controller", 00:12:18.009 "max_namespaces": 32, 00:12:18.009 "min_cntlid": 1, 00:12:18.009 "max_cntlid": 65519, 00:12:18.009 "namespaces": [ 00:12:18.009 { 00:12:18.009 "nsid": 1, 00:12:18.009 "bdev_name": "Null2", 00:12:18.009 "name": "Null2", 00:12:18.009 "nguid": "2284FAB2916E4E5BB7C520303456CFBB", 00:12:18.009 "uuid": "2284fab2-916e-4e5b-b7c5-20303456cfbb" 00:12:18.009 } 00:12:18.009 ] 00:12:18.009 }, 00:12:18.009 { 00:12:18.009 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:18.009 "subtype": "NVMe", 00:12:18.009 "listen_addresses": [ 00:12:18.009 { 00:12:18.009 "trtype": "TCP", 00:12:18.009 "adrfam": "IPv4", 00:12:18.009 "traddr": "10.0.0.2", 
00:12:18.009 "trsvcid": "4420" 00:12:18.009 } 00:12:18.009 ], 00:12:18.009 "allow_any_host": true, 00:12:18.009 "hosts": [], 00:12:18.009 "serial_number": "SPDK00000000000003", 00:12:18.009 "model_number": "SPDK bdev Controller", 00:12:18.009 "max_namespaces": 32, 00:12:18.009 "min_cntlid": 1, 00:12:18.009 "max_cntlid": 65519, 00:12:18.009 "namespaces": [ 00:12:18.009 { 00:12:18.009 "nsid": 1, 00:12:18.009 "bdev_name": "Null3", 00:12:18.009 "name": "Null3", 00:12:18.009 "nguid": "36AA45D5956C4DBEBFAD17E2BDA8E28F", 00:12:18.009 "uuid": "36aa45d5-956c-4dbe-bfad-17e2bda8e28f" 00:12:18.009 } 00:12:18.009 ] 00:12:18.009 }, 00:12:18.009 { 00:12:18.009 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:18.009 "subtype": "NVMe", 00:12:18.009 "listen_addresses": [ 00:12:18.009 { 00:12:18.009 "trtype": "TCP", 00:12:18.009 "adrfam": "IPv4", 00:12:18.009 "traddr": "10.0.0.2", 00:12:18.009 "trsvcid": "4420" 00:12:18.009 } 00:12:18.009 ], 00:12:18.009 "allow_any_host": true, 00:12:18.009 "hosts": [], 00:12:18.009 "serial_number": "SPDK00000000000004", 00:12:18.009 "model_number": "SPDK bdev Controller", 00:12:18.009 "max_namespaces": 32, 00:12:18.009 "min_cntlid": 1, 00:12:18.009 "max_cntlid": 65519, 00:12:18.009 "namespaces": [ 00:12:18.009 { 00:12:18.009 "nsid": 1, 00:12:18.009 "bdev_name": "Null4", 00:12:18.009 "name": "Null4", 00:12:18.009 "nguid": "E31A9B01F1A5493FAE3C140266CC264F", 00:12:18.009 "uuid": "e31a9b01-f1a5-493f-ae3c-140266cc264f" 00:12:18.009 } 00:12:18.009 ] 00:12:18.009 } 00:12:18.009 ] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.009 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:18.010 13:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.010 rmmod nvme_tcp 00:12:18.010 rmmod nvme_fabrics 00:12:18.010 rmmod nvme_keyring 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 189239 ']' 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 189239 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 189239 ']' 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 189239 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.010 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 189239 00:12:18.269 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.269 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.269 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 189239' 00:12:18.269 killing process with pid 189239 00:12:18.269 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 189239 00:12:18.269 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 189239 00:12:18.269 13:40:15 
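Teardown mirrors setup: each subsystem is deleted before its backing null bdev, the referral is removed, and bdev_get_bdevs confirms nothing leaked (check_bdevs comes back empty). nvmftestfini then unloads the initiator-side modules, which is where the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above come from, and kills the target process. Roughly:

for i in $(seq 1 4); do
  scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  scripts/rpc.py bdev_null_delete "Null$i"
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # expected to print nothing after cleanup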
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.269 13:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.824 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.824 00:12:20.824 real 0m9.892s 00:12:20.824 user 0m5.382s 00:12:20.824 sys 0m5.374s 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.825 ************************************ 00:12:20.825 END TEST nvmf_target_discovery 00:12:20.825 ************************************ 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.825 ************************************ 00:12:20.825 START TEST nvmf_referrals 00:12:20.825 ************************************ 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:20.825 * Looking for test storage... 
00:12:20.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.825 
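nvmf/common.sh generates the initiator identity once with nvme gen-hostnqn and reuses it on every nvme-cli invocation in this log (the --hostnqn/--hostid pair). A sketch of that derivation; the UUID extraction below is an illustration, and the harness's exact parsing may differ:

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}             # bare UUID portion of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009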
13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:20.825 13:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.825 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:27.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:27.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.397 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:27.398 Found net devices under 0000:af:00.0: cvl_0_0 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:27.398 Found net devices under 0000:af:00.1: cvl_0_1 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.398 13:40:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:12:27.398 00:12:27.398 --- 10.0.0.2 ping statistics --- 00:12:27.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.398 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:12:27.398 00:12:27.398 --- 10.0.0.1 ping statistics --- 00:12:27.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.398 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=193015 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 193015 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 193015 ']' 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.398 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.657 [2024-07-25 13:40:24.321530] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
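nvmf_tcp_init splits the two ports of the detected e810 NIC into a point-to-point test topology: cvl_0_0 is moved into a private network namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the two pings above prove reachability in both directions before the target starts. Condensed from the traced commands:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# target app launched inside the namespace: instance 0, all tracepoint groups,
# reactor mask 0xF (the four reactors reported above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF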
00:12:27.657 [2024-07-25 13:40:24.321583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.657 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.657 [2024-07-25 13:40:24.363374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:27.657 [2024-07-25 13:40:24.397657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.657 [2024-07-25 13:40:24.437590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.658 [2024-07-25 13:40:24.437631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.658 [2024-07-25 13:40:24.437641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.658 [2024-07-25 13:40:24.437649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.658 [2024-07-25 13:40:24.437673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.658 [2024-07-25 13:40:24.437726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.658 [2024-07-25 13:40:24.437749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.658 [2024-07-25 13:40:24.437832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.658 [2024-07-25 13:40:24.437834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 [2024-07-25 13:40:25.182101] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 [2024-07-25 13:40:25.198288] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 8009 *** 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ 
\1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.596 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # 
get_referral_ips nvme 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.856 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.115 13:40:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 
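
For reference, the referral checks logged above reduce to the short sequence below. This is a sketch rather than the harness code itself: rpc_cmd in the log is the autotest wrapper around SPDK's scripts/rpc.py, and the 10.0.0.2:8009 discovery endpoint (plus the --hostnqn/--hostid values) are specific to this run.

    # Register three referrals on the discovery service, then read them back over RPC.
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Cross-check from the initiator: the same addresses must show up in the
    # discovery log page, minus the entry for the discovery subsystem itself.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
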
00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.374 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.375 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.633 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.892 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 
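
The second half of the referral test exercises named referrals. Two flavors appear in the log above, distinguished by the subtype field of the discovery records; a minimal sketch, with the subtype strings copied verbatim from nvme-cli's JSON output in this run:

    # -n <subnqn> advertises that subsystem directly (subtype "nvme subsystem");
    # -n discovery points at another discovery service (subtype
    # "discovery subsystem referral", subnqn nqn.2014-08.org.nvmexpress.discovery).
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    # expected: nqn.2016-06.io.spdk:cnode1

    # Both referrals share address and port, so -n selects which one to drop:
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
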
00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.154 rmmod nvme_tcp 00:12:30.154 rmmod nvme_fabrics 00:12:30.154 rmmod nvme_keyring 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 193015 ']' 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 193015 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 193015 ']' 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 193015 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 193015 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 193015' 00:12:30.154 killing process with pid 193015 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 193015 00:12:30.154 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 193015 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.448 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.353 00:12:32.353 real 0m11.920s 00:12:32.353 user 0m13.291s 00:12:32.353 sys 0m6.040s 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.353 
************************************ 00:12:32.353 END TEST nvmf_referrals 00:12:32.353 ************************************ 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.353 13:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.613 ************************************ 00:12:32.613 START TEST nvmf_connect_disconnect 00:12:32.613 ************************************ 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.613 * Looking for test storage... 00:12:32.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.613 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.614 
13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@47 -- # : 0 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.614 13:40:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.182 13:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.182 
13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:39.182 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:39.182 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:39.182 Found net devices under 0000:af:00.0: cvl_0_0 00:12:39.182 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 
-- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:39.183 Found net devices under 0000:af:00.1: cvl_0_1 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.183 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:12:39.442 00:12:39.442 --- 10.0.0.2 ping statistics --- 00:12:39.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.442 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:39.442 00:12:39.442 --- 10.0.0.1 ping statistics --- 00:12:39.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.442 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=197247 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.442 13:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 197247 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 197247 ']' 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.442 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.442 [2024-07-25 13:40:36.203505] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:12:39.442 [2024-07-25 13:40:36.203559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.442 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.442 [2024-07-25 13:40:36.243626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:39.442 [2024-07-25 13:40:36.278570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.442 [2024-07-25 13:40:36.319583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.442 [2024-07-25 13:40:36.319627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.442 [2024-07-25 13:40:36.319636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.442 [2024-07-25 13:40:36.319645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.442 [2024-07-25 13:40:36.319652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
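
Before the connect/disconnect pass begins, the harness builds the two-namespace TCP topology visible in the ip/iptables calls above. Reconstructed here as a sketch; the cvl_0_* interface names and 10.0.0.x addresses belong to this machine, and nvmf_tgt is launched from the SPDK build tree:

    # The target port moves into its own network namespace; its sibling port
    # stays in the root namespace and acts as the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Connectivity check in both directions, then start the target inside the
    # namespace (core mask 0xF = 4 reactors, matching the startup notices above).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
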
00:12:39.442 [2024-07-25 13:40:36.319697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.442 [2024-07-25 13:40:36.319796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.442 [2024-07-25 13:40:36.319810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.442 [2024-07-25 13:40:36.319812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.378 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.378 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.379 [2024-07-25 13:40:37.071081] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.379 13:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.379 [2024-07-25 13:40:37.125572] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:40.379 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:42.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.788 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:46.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.139 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.139 rmmod nvme_tcp 00:16:33.139 rmmod nvme_fabrics 00:16:33.398 rmmod nvme_keyring 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 197247 ']' 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 197247 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 197247 ']' 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 197247 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:33.398 
13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 197247 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 197247' 00:16:33.398 killing process with pid 197247 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 197247 00:16:33.398 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 197247 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.658 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:35.560 00:16:35.560 real 4m3.129s 00:16:35.560 user 15m12.108s 00:16:35.560 sys 0m40.250s 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.560 ************************************ 00:16:35.560 END TEST nvmf_connect_disconnect 00:16:35.560 ************************************ 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.560 13:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.820 ************************************ 00:16:35.820 START TEST nvmf_multitarget 00:16:35.820 ************************************ 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:35.820 * Looking for test storage... 
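The kill sequence traced here first probes the pid with signal 0, refuses to kill a sudo wrapper, then kills and reaps the target. A condensed sketch of that pattern:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # signal 0: existence check only
        if [ "$(uname)" = Linux ]; then
            # never kill the sudo wrapper that launched the target process
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap and propagate exit status
    }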
00:16:35.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.820 
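Each test re-sources paths/export.sh, which prepends the same toolchain directories again; that is why the PATH values above contain the go/golangci/protoc segment many times over. A hypothetical guard that would keep PATH idempotent (path_prepend is not part of the repository, only an illustration):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present: leave PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH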
13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:35.820 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:35.821 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.821 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local 
-ga mlx 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.390 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:42.391 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:42.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.391 13:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:42.391 Found net devices under 0000:af:00.0: cvl_0_0 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:42.391 Found net devices under 0000:af:00.1: cvl_0_1 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ 
tcp == tcp ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:16:42.391 00:16:42.391 --- 10.0.0.2 ping statistics --- 00:16:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.391 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:16:42.391 00:16:42.391 --- 10.0.0.1 ping statistics --- 00:16:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.391 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.391 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=241758 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 241758 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 241758 ']' 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.391 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.392 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.392 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.392 [2024-07-25 13:44:39.069406] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
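nvmf_tcp_init splits the two E810 ports across network namespaces so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) talk over real hardware on a single host. The setup, condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability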
00:16:42.392 [2024-07-25 13:44:39.069456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.392 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.392 [2024-07-25 13:44:39.109770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:42.392 [2024-07-25 13:44:39.144855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.392 [2024-07-25 13:44:39.185571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.392 [2024-07-25 13:44:39.185613] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.392 [2024-07-25 13:44:39.185622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.392 [2024-07-25 13:44:39.185631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.392 [2024-07-25 13:44:39.185642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.392 [2024-07-25 13:44:39.185692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.392 [2024-07-25 13:44:39.185808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.392 [2024-07-25 13:44:39.185832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.392 [2024-07-25 13:44:39.185834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:43.329 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:43.329 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:43.329 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:43.329 "nvmf_tgt_1" 00:16:43.329 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_2 -s 32 00:16:43.588 "nvmf_tgt_2" 00:16:43.588 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:43.588 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:43.588 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:43.588 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:43.588 true 00:16:43.588 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:43.847 true 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.847 rmmod nvme_tcp 00:16:43.847 rmmod nvme_fabrics 00:16:43.847 rmmod nvme_keyring 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 241758 ']' 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 241758 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 241758 ']' 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 241758 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241758 00:16:44.107 13:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241758' 00:16:44.107 killing process with pid 241758 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 241758 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 241758 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.645 00:16:46.645 real 0m10.545s 00:16:46.645 user 0m9.407s 00:16:46.645 sys 0m5.351s 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.645 ************************************ 00:16:46.645 END TEST nvmf_multitarget 00:16:46.645 ************************************ 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.645 ************************************ 00:16:46.645 START TEST nvmf_rpc 00:16:46.645 ************************************ 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:46.645 * Looking for test storage... 
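The multitarget test that just finished boils down to creating two extra targets, checking the count with jq, and deleting them again. The same flow as a standalone sketch (rpc helper path taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default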
00:16:46.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.645 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.646 13:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.646 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:16:53.220 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.221 13:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:53.221 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:53.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.221 
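gather_supported_nvmf_pci_devs walks the PCI bus for known NIC IDs (here Intel 0x8086:0x159b, an E810 port) and records the kernel net devices bound to each function. A simplified sketch; the loop body is an approximation of the traced checks:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [ "$vendor" = "$intel" ] && [ "$device" = 0x159b ] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do                # netdev directory for this function
            [ -e "$net" ] || continue              # skip ports with no bound netdev
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done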
13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:53.221 Found net devices under 0000:af:00.0: cvl_0_0 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:53.221 Found net devices under 0000:af:00.1: cvl_0_1 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.221 13:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.221 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:16:53.221 00:16:53.221 --- 10.0.0.2 ping statistics --- 00:16:53.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.222 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:53.222 00:16:53.222 --- 10.0.0.1 ping statistics --- 00:16:53.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.222 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.222 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=245737 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.222 13:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 245737 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 245737 ']' 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.222 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.222 [2024-07-25 13:44:50.073024] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:53.222 [2024-07-25 13:44:50.073081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.482 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.482 [2024-07-25 13:44:50.115753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.482 [2024-07-25 13:44:50.150005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.482 [2024-07-25 13:44:50.189968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.482 [2024-07-25 13:44:50.190016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.482 [2024-07-25 13:44:50.190026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.482 [2024-07-25 13:44:50.190035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.482 [2024-07-25 13:44:50.190042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
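The block above is the harness's nvmf_tcp_init plus nvmfappstart: the second cvl port stays in the default namespace as the initiator, the first is moved into a fresh namespace as the target, both sides get a 10.0.0.x/24 address, TCP port 4420 is opened, connectivity is proven with one ping in each direction, and nvmf_tgt is launched inside the namespace before waitforlisten polls for its RPC socket. A minimal standalone sketch of the same setup, using only commands that appear in the log (run as root; the backgrounding of nvmf_tgt is implicit in nvmfappstart, and error handling is omitted):

# Recreate the two-port TCP topology (cvl_0_0 = target side, cvl_0_1 = initiator side).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
# Start the target inside the namespace, as nvmfappstart does above:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &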
00:16:53.482 [2024-07-25 13:44:50.190084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.482 [2024-07-25 13:44:50.190183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.482 [2024-07-25 13:44:50.190267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.482 [2024-07-25 13:44:50.190269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.051 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.311 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.311 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:54.311 "tick_rate": 2500000000, 00:16:54.311 "poll_groups": [ 00:16:54.311 { 00:16:54.311 "name": "nvmf_tgt_poll_group_000", 00:16:54.311 "admin_qpairs": 0, 00:16:54.311 "io_qpairs": 0, 00:16:54.311 "current_admin_qpairs": 0, 00:16:54.311 "current_io_qpairs": 0, 00:16:54.311 "pending_bdev_io": 0, 00:16:54.311 "completed_nvme_io": 0, 00:16:54.311 "transports": [] 00:16:54.311 }, 00:16:54.311 { 00:16:54.312 "name": "nvmf_tgt_poll_group_001", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [] 00:16:54.312 }, 00:16:54.312 { 00:16:54.312 "name": "nvmf_tgt_poll_group_002", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [] 00:16:54.312 }, 00:16:54.312 { 00:16:54.312 "name": "nvmf_tgt_poll_group_003", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [] 00:16:54.312 } 00:16:54.312 ] 00:16:54.312 }' 00:16:54.312 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:54.312 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:54.312 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:54.312 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:54.312 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
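The (( 4 == 4 )) check above is rpc.sh's jcount helper confirming that nvmf_get_stats reports one poll group per core in the 0xF mask. jcount, and the jsum checks that follow, are the thin jq/awk pipelines visible in the trace, applied to the captured $stats JSON; spelled out (assuming $stats holds the dump shown above):

# jcount '.poll_groups[].name' -> 4 poll groups for core mask 0xF
echo "$stats" | jq '.poll_groups[].name' | wc -l
# jsum '.poll_groups[].admin_qpairs' -> 0, since no host has connected yet
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'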
00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.312 [2024-07-25 13:44:51.051554] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:54.312 "tick_rate": 2500000000, 00:16:54.312 "poll_groups": [ 00:16:54.312 { 00:16:54.312 "name": "nvmf_tgt_poll_group_000", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [ 00:16:54.312 { 00:16:54.312 "trtype": "TCP" 00:16:54.312 } 00:16:54.312 ] 00:16:54.312 }, 00:16:54.312 { 00:16:54.312 "name": "nvmf_tgt_poll_group_001", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [ 00:16:54.312 { 00:16:54.312 "trtype": "TCP" 00:16:54.312 } 00:16:54.312 ] 00:16:54.312 }, 00:16:54.312 { 00:16:54.312 "name": "nvmf_tgt_poll_group_002", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [ 00:16:54.312 { 00:16:54.312 "trtype": "TCP" 00:16:54.312 } 00:16:54.312 ] 00:16:54.312 }, 00:16:54.312 { 00:16:54.312 "name": "nvmf_tgt_poll_group_003", 00:16:54.312 "admin_qpairs": 0, 00:16:54.312 "io_qpairs": 0, 00:16:54.312 "current_admin_qpairs": 0, 00:16:54.312 "current_io_qpairs": 0, 00:16:54.312 "pending_bdev_io": 0, 00:16:54.312 "completed_nvme_io": 0, 00:16:54.312 "transports": [ 00:16:54.312 { 00:16:54.312 "trtype": "TCP" 00:16:54.312 } 00:16:54.312 ] 00:16:54.312 } 00:16:54.312 ] 00:16:54.312 }' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:54.312 13:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.312 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.572 Malloc1 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.572 [2024-07-25 13:44:51.235237] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:54.572 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:16:54.573 [2024-07-25 13:44:51.270045] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:16:54.573 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:54.573 could not add new controller: failed to write to nvme-fabrics device 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.573 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.954 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.954 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.954 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.954 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:55.954 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.859 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.859 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.859 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.860 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.119 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:58.119 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:58.119 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.120 [2024-07-25 13:44:54.787125] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:16:58.120 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:58.120 could not add new controller: failed to write to nvme-fabrics device 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.120 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.572 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:59.572 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:59.572 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:59.572 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:59.572 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
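What the two failed connects above establish is the per-host access-control path: while allow_any_host is disabled, nvmf_qpair_access_allowed rejects any host NQN that was not explicitly added, the initiator sees "Failed to write to /dev/nvme-fabrics: Input/output error", and the NOT wrapper turns that expected failure into a pass. Stripped of the xtrace plumbing, the flow being tested is roughly the following sketch, where HOSTNQN stands in for the nqn.2014-08.org.nvmexpress:uuid:... value in the log and rpc.py stands in for the harness's rpc_cmd wrapper around scripts/rpc.py:

# Rejected while allow_any_host is disabled and the host was never added:
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 && exit 1
# Whitelist the host and the identical connect succeeds:
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# Remove the host again and connects fail until allow_any_host is re-enabled:
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420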
00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 [2024-07-25 13:44:58.345787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 
13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.479 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.737 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.737 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.164 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:03.164 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:03.164 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.164 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:03.164 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 [2024-07-25 13:45:01.894108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.072 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.451 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.451 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:17:06.451 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.451 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.451 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:08.357 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 [2024-07-25 13:45:05.393141] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.618 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.997 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:09.997 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.997 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.997 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:09.997 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:11.903 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:11.904 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:11.904 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.163 13:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 [2024-07-25 13:45:08.929400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.163 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.539 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.539 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:13.539 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.539 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:13.539 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:15.444 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:15.444 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:15.444 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.444 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:15.444 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.444 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:15.445 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.704 [2024-07-25 13:45:12.469780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.704 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.081 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.081 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:17.081 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.081 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:17.081 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:19.055 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:19.056 13:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.056 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 [2024-07-25 13:45:15.982468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 [2024-07-25 13:45:16.030556] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.316 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 [2024-07-25 13:45:16.082703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 [2024-07-25 13:45:16.130866] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 [2024-07-25 13:45:16.179024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.317 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.576 13:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.576 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:19.576 "tick_rate": 2500000000, 00:17:19.576 "poll_groups": [ 00:17:19.576 { 00:17:19.576 "name": "nvmf_tgt_poll_group_000", 00:17:19.576 "admin_qpairs": 2, 00:17:19.576 "io_qpairs": 196, 00:17:19.576 "current_admin_qpairs": 0, 00:17:19.576 "current_io_qpairs": 0, 00:17:19.576 "pending_bdev_io": 0, 00:17:19.576 "completed_nvme_io": 285, 00:17:19.576 "transports": [ 00:17:19.576 { 00:17:19.576 "trtype": "TCP" 00:17:19.576 } 00:17:19.576 ] 00:17:19.576 }, 00:17:19.576 { 00:17:19.576 "name": "nvmf_tgt_poll_group_001", 00:17:19.576 "admin_qpairs": 2, 00:17:19.576 "io_qpairs": 196, 00:17:19.576 "current_admin_qpairs": 0, 00:17:19.576 "current_io_qpairs": 0, 00:17:19.576 "pending_bdev_io": 0, 00:17:19.576 "completed_nvme_io": 248, 00:17:19.576 "transports": [ 00:17:19.576 { 00:17:19.576 "trtype": "TCP" 00:17:19.576 } 00:17:19.576 ] 00:17:19.576 }, 00:17:19.576 { 00:17:19.576 "name": "nvmf_tgt_poll_group_002", 00:17:19.576 "admin_qpairs": 1, 00:17:19.576 "io_qpairs": 196, 00:17:19.576 "current_admin_qpairs": 0, 00:17:19.576 "current_io_qpairs": 0, 00:17:19.577 "pending_bdev_io": 0, 00:17:19.577 "completed_nvme_io": 345, 00:17:19.577 "transports": [ 00:17:19.577 { 00:17:19.577 "trtype": "TCP" 00:17:19.577 } 00:17:19.577 ] 00:17:19.577 }, 00:17:19.577 { 00:17:19.577 "name": "nvmf_tgt_poll_group_003", 00:17:19.577 "admin_qpairs": 2, 00:17:19.577 "io_qpairs": 196, 00:17:19.577 "current_admin_qpairs": 0, 00:17:19.577 "current_io_qpairs": 0, 00:17:19.577 "pending_bdev_io": 0, 00:17:19.577 "completed_nvme_io": 256, 00:17:19.577 "transports": [ 00:17:19.577 { 00:17:19.577 "trtype": "TCP" 00:17:19.577 } 00:17:19.577 ] 00:17:19.577 } 00:17:19.577 ] 00:17:19.577 }' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.577 rmmod nvme_tcp 00:17:19.577 rmmod nvme_fabrics 00:17:19.577 rmmod nvme_keyring 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 245737 ']' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 245737 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 245737 ']' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 245737 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245737 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245737' 00:17:19.577 killing process with pid 245737 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 245737 00:17:19.577 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 245737 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.836 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.374 00:17:22.374 real 0m35.613s 00:17:22.374 user 1m46.524s 00:17:22.374 sys 0m8.145s 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.374 ************************************ 00:17:22.374 END TEST nvmf_rpc 00:17:22.374 ************************************ 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.374 ************************************ 00:17:22.374 START TEST nvmf_invalid 00:17:22.374 ************************************ 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:22.374 * Looking for test storage... 00:17:22.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:22.374 13:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.374 13:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.374 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:28.947 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:28.947 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.947 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:28.948 Found net devices under 0000:af:00.0: cvl_0_0 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.948 13:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:28.948 Found net devices under 0000:af:00.1: cvl_0_1 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:17:28.948 00:17:28.948 --- 10.0.0.2 ping statistics --- 00:17:28.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.948 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:17:28.948 00:17:28.948 --- 10.0.0.1 ping statistics --- 00:17:28.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.948 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=254511 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 254511 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 254511 ']' 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.948 13:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.948 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.948 [2024-07-25 13:45:25.769401] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:28.948 [2024-07-25 13:45:25.769455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.948 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.948 [2024-07-25 13:45:25.809895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:29.208 [2024-07-25 13:45:25.843377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.208 [2024-07-25 13:45:25.883614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.208 [2024-07-25 13:45:25.883656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.208 [2024-07-25 13:45:25.883666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.208 [2024-07-25 13:45:25.883675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.208 [2024-07-25 13:45:25.883682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
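The start-up sequence traced above (launch nvmf_tgt inside the target network namespace, then block in waitforlisten until the JSON-RPC socket answers) condenses to the following sketch. The binary path, namespace name, flags, rpc.py path, and socket path are taken verbatim from the log; the polling loop itself is an assumption, since waitforlisten's body is not shown in this excerpt:

    # paths as they appear in the trace (rpc is set at invalid.sh@12)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # launch the target app inside the server-side network namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # assumed shape of waitforlisten: poll the RPC socket until it responds
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during start-up
        sleep 0.1
    done

Once the socket answers, each rpc_cmd call in the remainder of the trace is essentially that same rpc.py invocation with a different method name.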
00:17:29.208 [2024-07-25 13:45:25.883737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.208 [2024-07-25 13:45:25.883790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.208 [2024-07-25 13:45:25.883878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.208 [2024-07-25 13:45:25.883880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:29.776 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28070 00:17:30.036 [2024-07-25 13:45:26.785686] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:30.036 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:30.036 { 00:17:30.036 "nqn": "nqn.2016-06.io.spdk:cnode28070", 00:17:30.036 "tgt_name": "foobar", 00:17:30.036 "method": "nvmf_create_subsystem", 00:17:30.036 "req_id": 1 00:17:30.036 } 00:17:30.036 Got JSON-RPC error response 00:17:30.036 response: 00:17:30.036 { 00:17:30.036 "code": -32603, 00:17:30.036 "message": "Unable to find target foobar" 00:17:30.036 }' 00:17:30.036 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:30.036 { 00:17:30.036 "nqn": "nqn.2016-06.io.spdk:cnode28070", 00:17:30.036 "tgt_name": "foobar", 00:17:30.036 "method": "nvmf_create_subsystem", 00:17:30.036 "req_id": 1 00:17:30.036 } 00:17:30.036 Got JSON-RPC error response 00:17:30.036 response: 00:17:30.036 { 00:17:30.036 "code": -32603, 00:17:30.036 "message": "Unable to find target foobar" 00:17:30.036 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:30.036 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:30.036 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14654 00:17:30.294 [2024-07-25 13:45:26.982386] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14654: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:30.294 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:30.294 { 00:17:30.294 "nqn": "nqn.2016-06.io.spdk:cnode14654", 00:17:30.294 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:30.294 "method": "nvmf_create_subsystem", 00:17:30.294 "req_id": 1 00:17:30.294 } 00:17:30.295 Got JSON-RPC error 
response 00:17:30.295 response: 00:17:30.295 { 00:17:30.295 "code": -32602, 00:17:30.295 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:30.295 }' 00:17:30.295 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:30.295 { 00:17:30.295 "nqn": "nqn.2016-06.io.spdk:cnode14654", 00:17:30.295 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:30.295 "method": "nvmf_create_subsystem", 00:17:30.295 "req_id": 1 00:17:30.295 } 00:17:30.295 Got JSON-RPC error response 00:17:30.295 response: 00:17:30.295 { 00:17:30.295 "code": -32602, 00:17:30.295 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:30.295 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:30.295 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:30.295 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1064 00:17:30.295 [2024-07-25 13:45:27.179006] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1064: invalid model number 'SPDK_Controller' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:30.554 { 00:17:30.554 "nqn": "nqn.2016-06.io.spdk:cnode1064", 00:17:30.554 "model_number": "SPDK_Controller\u001f", 00:17:30.554 "method": "nvmf_create_subsystem", 00:17:30.554 "req_id": 1 00:17:30.554 } 00:17:30.554 Got JSON-RPC error response 00:17:30.554 response: 00:17:30.554 { 00:17:30.554 "code": -32602, 00:17:30.554 "message": "Invalid MN SPDK_Controller\u001f" 00:17:30.554 }' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:30.554 { 00:17:30.554 "nqn": "nqn.2016-06.io.spdk:cnode1064", 00:17:30.554 "model_number": "SPDK_Controller\u001f", 00:17:30.554 "method": "nvmf_create_subsystem", 00:17:30.554 "req_id": 1 00:17:30.554 } 00:17:30.554 Got JSON-RPC error response 00:17:30.554 response: 00:17:30.554 { 00:17:30.554 "code": -32602, 00:17:30.554 "message": "Invalid MN SPDK_Controller\u001f" 00:17:30.554 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 116 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:30.554 13:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.554 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:30.555 13:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ty^]#=` rYlrkm7HGh=i]' 00:17:30.555 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ty^]#=` rYlrkm7HGh=i]' nqn.2016-06.io.spdk:cnode282 00:17:30.814 [2024-07-25 13:45:27.532173] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode282: invalid serial number 'ty^]#=` rYlrkm7HGh=i]' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:30.814 { 00:17:30.814 "nqn": "nqn.2016-06.io.spdk:cnode282", 00:17:30.814 "serial_number": "ty^]#=` rYlrkm7HGh=i]", 00:17:30.814 "method": "nvmf_create_subsystem", 00:17:30.814 "req_id": 1 00:17:30.814 } 00:17:30.814 Got JSON-RPC error response 00:17:30.814 response: 00:17:30.814 { 00:17:30.814 "code": -32602, 00:17:30.814 "message": "Invalid SN ty^]#=` rYlrkm7HGh=i]" 00:17:30.814 }' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:30.814 { 00:17:30.814 "nqn": "nqn.2016-06.io.spdk:cnode282", 00:17:30.814 "serial_number": "ty^]#=` rYlrkm7HGh=i]", 00:17:30.814 "method": "nvmf_create_subsystem", 00:17:30.814 "req_id": 1 00:17:30.814 } 00:17:30.814 Got JSON-RPC error response 00:17:30.814 response: 00:17:30.814 { 00:17:30.814 "code": -32602, 00:17:30.814 "message": "Invalid SN ty^]#=` rYlrkm7HGh=i]" 00:17:30.814 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:30.814 13:45:27 
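# The xtrace above is target/invalid.sh's gen_random_s at work: the chars
# array holds ASCII codes 32-127, and each iteration renders one random code
# as hex (printf %x) and expands it to a character (echo -e '\xNN'). A
# condensed sketch of that loop, assuming plain bash -- this paraphrases the
# traced helper, it is not the verbatim script:
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                  # same code range as the traced array
    for (( ll = 0; ll < length; ll++ )); do
        local x
        x=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")   # e.g. 116 -> 74
        string+=$(echo -e "\x$x")                          # e.g. \x74 -> t
    done
    echo "$string"
}
# The [[ t == \- ]] probe in the trace checks the first character, apparently
# so a string starting with '-' can't be mistaken for an option flag when it
# is later passed to rpc.py. gen_random_s 21 produced the 21-character serial
# 'ty^]#=` rYlrkm7HGh=i]' used in the test above.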
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:30.814 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:30.815 13:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:30.815 13:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:30.815 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 
13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 
13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 
00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.075 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'811PJ9TTPKqe$GS>bn<i'\''%NlemD"b ]S"Xj,PAeb>' 00:17:31.076 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '811PJ9TTPKqe$GS>bn<i'\''%NlemD"b ]S"Xj,PAeb>' nqn.2016-06.io.spdk:cnode3651 00:17:31.334 [2024-07-25 13:45:28.033844] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3651: invalid model number '811PJ9TTPKqe$GS>bn<i'%NlemD"b ]S"Xj,PAeb>' 00:17:31.334 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:31.334 { 00:17:31.334 "nqn": "nqn.2016-06.io.spdk:cnode3651", 00:17:31.334 "model_number": "811PJ9TTPKqe$GS>bn<i'\''%NlemD"b ]S"Xj,PAeb>", 00:17:31.334 "method": "nvmf_create_subsystem", 00:17:31.334 "req_id": 1 00:17:31.334 } 00:17:31.334 Got JSON-RPC error response 00:17:31.334 response: 00:17:31.334 { 00:17:31.334 "code": -32602, 00:17:31.334 "message": "Invalid MN 811PJ9TTPKqe$GS>bn<i'\''%NlemD"b ]S"Xj,PAeb>" 00:17:31.334 }' 00:17:31.334 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:31.334 { 00:17:31.334 "nqn": "nqn.2016-06.io.spdk:cnode3651", 00:17:31.334 "model_number": "811PJ9TTPKqe$GS>bn<i'%NlemD"b ]S"Xj,PAeb>", 00:17:31.334 "method": "nvmf_create_subsystem", 00:17:31.334 "req_id": 1 00:17:31.334 } 00:17:31.334 Got JSON-RPC error response 00:17:31.334 response: 00:17:31.334 { 00:17:31.334 "code": -32602, 00:17:31.334 "message": "Invalid MN 811PJ9TTPKqe$GS>bn<i'%NlemD"b ]S"Xj,PAeb>" 00:17:31.334 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:31.334 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:31.334 [2024-07-25 13:45:28.218509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.593 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:31.593 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:31.593 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:31.593 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:31.593 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:31.593 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:31.851 [2024-07-25 13:45:28.611785] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:31.851 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:31.851 { 00:17:31.851 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:31.851 "listen_address": { 00:17:31.851 "trtype": "tcp", 00:17:31.851 "traddr": "", 
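# Every probe in this suite follows the same capture-and-match shape seen
# above: run rpc.py with one deliberately bad argument, keep stderr, and
# assert on the JSON-RPC error text. The 21- and 41-character strings are one
# byte past the NVMe Identify controller fields (20-byte serial, 40-byte
# model), which is presumably why SPDK rejects them. A minimal sketch of the
# serial-number case, assuming rpc.py from scripts/ is on PATH and a target
# is up on the default RPC socket:
sn=$(gen_random_s 21)                        # one character too long for an SN
out=$(rpc.py nvmf_create_subsystem -s "$sn" \
      nqn.2016-06.io.spdk:cnode282 2>&1) || true
[[ $out == *"Invalid SN"* ]] || echo "expected Invalid SN, got: $out"
# The model-number case is identical with -d, a 41-character string, and an
# "Invalid MN" match.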
00:17:31.851 "trsvcid": "4421" 00:17:31.851 }, 00:17:31.851 "method": "nvmf_subsystem_remove_listener", 00:17:31.851 "req_id": 1 00:17:31.851 } 00:17:31.851 Got JSON-RPC error response 00:17:31.851 response: 00:17:31.851 { 00:17:31.851 "code": -32602, 00:17:31.851 "message": "Invalid parameters" 00:17:31.851 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:31.851 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19702 -i 0 00:17:32.109 [2024-07-25 13:45:28.808352] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19702: invalid cntlid range [0-65519] 00:17:32.109 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:32.109 { 00:17:32.109 "nqn": "nqn.2016-06.io.spdk:cnode19702", 00:17:32.109 "min_cntlid": 0, 00:17:32.109 "method": "nvmf_create_subsystem", 00:17:32.109 "req_id": 1 00:17:32.109 } 00:17:32.109 Got JSON-RPC error response 00:17:32.109 response: 00:17:32.109 { 00:17:32.109 "code": -32602, 00:17:32.109 "message": "Invalid cntlid range [0-65519]" 00:17:32.109 }' 00:17:32.109 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:32.109 { 00:17:32.109 "nqn": "nqn.2016-06.io.spdk:cnode19702", 00:17:32.109 "min_cntlid": 0, 00:17:32.109 "method": "nvmf_create_subsystem", 00:17:32.109 "req_id": 1 00:17:32.109 } 00:17:32.109 Got JSON-RPC error response 00:17:32.109 response: 00:17:32.109 { 00:17:32.109 "code": -32602, 00:17:32.109 "message": "Invalid cntlid range [0-65519]" 00:17:32.109 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:32.109 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11680 -i 65520 00:17:32.368 [2024-07-25 13:45:29.001024] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11680: invalid cntlid range [65520-65519] 00:17:32.368 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:32.368 { 00:17:32.368 "nqn": "nqn.2016-06.io.spdk:cnode11680", 00:17:32.368 "min_cntlid": 65520, 00:17:32.368 "method": "nvmf_create_subsystem", 00:17:32.368 "req_id": 1 00:17:32.368 } 00:17:32.368 Got JSON-RPC error response 00:17:32.368 response: 00:17:32.368 { 00:17:32.368 "code": -32602, 00:17:32.368 "message": "Invalid cntlid range [65520-65519]" 00:17:32.368 }' 00:17:32.368 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:32.368 { 00:17:32.368 "nqn": "nqn.2016-06.io.spdk:cnode11680", 00:17:32.368 "min_cntlid": 65520, 00:17:32.368 "method": "nvmf_create_subsystem", 00:17:32.368 "req_id": 1 00:17:32.368 } 00:17:32.368 Got JSON-RPC error response 00:17:32.368 response: 00:17:32.368 { 00:17:32.368 "code": -32602, 00:17:32.368 "message": "Invalid cntlid range [65520-65519]" 00:17:32.368 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:32.368 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32286 -I 0 00:17:32.368 [2024-07-25 13:45:29.185585] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32286: invalid cntlid range [1-0] 00:17:32.368 13:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:32.368 { 00:17:32.368 "nqn": "nqn.2016-06.io.spdk:cnode32286", 00:17:32.368 "max_cntlid": 0, 00:17:32.368 "method": "nvmf_create_subsystem", 00:17:32.368 "req_id": 1 00:17:32.368 } 00:17:32.368 Got JSON-RPC error response 00:17:32.368 response: 00:17:32.368 { 00:17:32.368 "code": -32602, 00:17:32.368 "message": "Invalid cntlid range [1-0]" 00:17:32.368 }' 00:17:32.368 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:32.369 { 00:17:32.369 "nqn": "nqn.2016-06.io.spdk:cnode32286", 00:17:32.369 "max_cntlid": 0, 00:17:32.369 "method": "nvmf_create_subsystem", 00:17:32.369 "req_id": 1 00:17:32.369 } 00:17:32.369 Got JSON-RPC error response 00:17:32.369 response: 00:17:32.369 { 00:17:32.369 "code": -32602, 00:17:32.369 "message": "Invalid cntlid range [1-0]" 00:17:32.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:32.369 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode760 -I 65520 00:17:32.627 [2024-07-25 13:45:29.370196] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode760: invalid cntlid range [1-65520] 00:17:32.627 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:32.627 { 00:17:32.627 "nqn": "nqn.2016-06.io.spdk:cnode760", 00:17:32.627 "max_cntlid": 65520, 00:17:32.627 "method": "nvmf_create_subsystem", 00:17:32.627 "req_id": 1 00:17:32.627 } 00:17:32.627 Got JSON-RPC error response 00:17:32.627 response: 00:17:32.627 { 00:17:32.627 "code": -32602, 00:17:32.627 "message": "Invalid cntlid range [1-65520]" 00:17:32.627 }' 00:17:32.627 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:32.627 { 00:17:32.627 "nqn": "nqn.2016-06.io.spdk:cnode760", 00:17:32.627 "max_cntlid": 65520, 00:17:32.627 "method": "nvmf_create_subsystem", 00:17:32.627 "req_id": 1 00:17:32.627 } 00:17:32.627 Got JSON-RPC error response 00:17:32.627 response: 00:17:32.627 { 00:17:32.627 "code": -32602, 00:17:32.627 "message": "Invalid cntlid range [1-65520]" 00:17:32.627 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:32.627 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26206 -i 6 -I 5 00:17:32.887 [2024-07-25 13:45:29.554829] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26206: invalid cntlid range [6-5] 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:32.887 { 00:17:32.887 "nqn": "nqn.2016-06.io.spdk:cnode26206", 00:17:32.887 "min_cntlid": 6, 00:17:32.887 "max_cntlid": 5, 00:17:32.887 "method": "nvmf_create_subsystem", 00:17:32.887 "req_id": 1 00:17:32.887 } 00:17:32.887 Got JSON-RPC error response 00:17:32.887 response: 00:17:32.887 { 00:17:32.887 "code": -32602, 00:17:32.887 "message": "Invalid cntlid range [6-5]" 00:17:32.887 }' 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:32.887 { 00:17:32.887 "nqn": "nqn.2016-06.io.spdk:cnode26206", 00:17:32.887 "min_cntlid": 6, 00:17:32.887 "max_cntlid": 5, 00:17:32.887 "method": "nvmf_create_subsystem", 00:17:32.887 "req_id": 1 00:17:32.887 } 00:17:32.887 
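# Taken together, the five cntlid probes above and below bracket the window
# SPDK accepts: min and max controller IDs must each lie in [1, 65519] and
# min must not exceed max, so 0, 65520, and the inverted pair 6..5 all fail
# with "Invalid cntlid range". A sketch of the boundary sweep, under the same
# rpc.py assumptions as before (the cnode names here are arbitrary):
for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
    out=$(rpc.py nvmf_create_subsystem $args \
          nqn.2016-06.io.spdk:cnode$RANDOM 2>&1) || true   # $args split on purpose
    [[ $out == *"Invalid cntlid range"* ]] || echo "unexpected: $out"
done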
Got JSON-RPC error response 00:17:32.887 response: 00:17:32.887 { 00:17:32.887 "code": -32602, 00:17:32.887 "message": "Invalid cntlid range [6-5]" 00:17:32.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:32.887 { 00:17:32.887 "name": "foobar", 00:17:32.887 "method": "nvmf_delete_target", 00:17:32.887 "req_id": 1 00:17:32.887 } 00:17:32.887 Got JSON-RPC error response 00:17:32.887 response: 00:17:32.887 { 00:17:32.887 "code": -32602, 00:17:32.887 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:32.887 }' 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:32.887 { 00:17:32.887 "name": "foobar", 00:17:32.887 "method": "nvmf_delete_target", 00:17:32.887 "req_id": 1 00:17:32.887 } 00:17:32.887 Got JSON-RPC error response 00:17:32.887 response: 00:17:32.887 { 00:17:32.887 "code": -32602, 00:17:32.887 "message": "The specified target doesn't exist, cannot delete it." 00:17:32.887 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.887 rmmod nvme_tcp 00:17:32.887 rmmod nvme_fabrics 00:17:32.887 rmmod nvme_keyring 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 254511 ']' 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 254511 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 254511 ']' 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 254511 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.887 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 254511 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
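# nvmftestfini's teardown, traced above and below, runs in a fixed order:
# flush dirty pages, unload the host-side NVMe/TCP modules (the rmmod lines),
# kill the nvmf target by the pid saved at startup (254511 in this run), then
# drop the spdk netns and flush the test interface address. A condensed
# sketch of that order, assuming root and a $nvmfpid captured at launch:
sync
modprobe -v -r nvme-tcp        # also pulls nvme_fabrics/nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # killprocess: SIGTERM first...
wait "$nvmfpid"                # ...then reap (works because the target is a child)
ip -4 addr flush cvl_0_1       # test NIC on this rig; the name is rig-specific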
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 254511' 00:17:33.146 killing process with pid 254511 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 254511 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 254511 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.146 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.686 00:17:35.686 real 0m13.251s 00:17:35.686 user 0m20.375s 00:17:35.686 sys 0m6.377s 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:35.686 ************************************ 00:17:35.686 END TEST nvmf_invalid 00:17:35.686 ************************************ 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:35.686 13:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.686 ************************************ 00:17:35.686 START TEST nvmf_connect_stress 00:17:35.686 ************************************ 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:35.687 * Looking for test storage... 
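# Between the END and START banners sits the harness's run_test wrapper: it
# prints the banner, times the named test script (the real/user/sys summary
# above), and passes the script's exit status through. A minimal sketch of
# that shape, assuming bash built-ins only -- the real helper in
# autotest_common.sh adds xtrace bookkeeping around this core:
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test nvmf_connect_stress ./connect_stress.sh --transport=tcp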
00:17:35.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
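# nvmf/common.sh, sourced just above, pins the constants every fabric test
# reuses: TCP ports 4420/4421/4422, 127.0.0.1 as the TCP address, serial
# SPDKISFASTANDAWESOME, and a host NQN minted once per run with
# `nvme gen-hostnqn`. A sketch of the same defaults for a standalone
# reproduction (the uuid is whatever gen-hostnqn returns on your host):
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare uuid, as shown in the trace
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")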
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
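# The repeated /opt/golangci:/opt/protoc:/opt/go runs in the PATH above are
# paths/export.sh prepending its toolchain directories unconditionally each
# time a sub-test re-sources common.sh. Harmless but noisy; a guard like this
# sketch would keep PATH idempotent (illustrative only, not the shipped
# export.sh):
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present, leave PATH alone
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/golangci/1.54.2/bin
export PATH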
-- # '[' -n '' ']' 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.687 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.292 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.293 13:45:38 
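# gather_supported_nvmf_pci_devs, entered above, whitelists NICs by PCI
# vendor:device pairs: Intel (0x8086) E810 IDs 0x1592/0x159b and X722 0x37d2,
# plus the Mellanox (0x15b3) IDs appended below. A sketch of the same check
# as a sysfs walk (the table here is a subset, for illustration only):
declare -A supported=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810    # Intel E810 family
    [0x8086:0x37d2]=x722                         # Intel X722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx     # two of the traced Mellanox IDs
)
for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor") dev=$(<"$pci/device")  # files hold e.g. 0x8086 / 0x159b
    [[ -n ${supported[$ven:$dev]:-} ]] &&
        echo "Found ${pci##*/} ($ven - $dev)"    # same shape as the log's Found lines
done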
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:42.293 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:42.293 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
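# Once a device passes the ID check, the loop above resolves it to its kernel
# netdev by globbing sysfs: /sys/bus/pci/devices/<BDF>/net/ holds one entry
# per interface the driver registered, which is where the "Found net devices
# under 0000:af:00.x: cvl_0_x" lines come from. A standalone sketch of that
# lookup for this rig's two E810 ports:
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue          # driver registered no netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done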
00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:42.293 Found net devices under 0000:af:00.0: cvl_0_0 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:42.293 Found net devices under 0000:af:00.1: cvl_0_1 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:17:42.293 00:17:42.293 --- 10.0.0.2 ping statistics --- 00:17:42.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.293 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:42.293 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:17:42.293 00:17:42.293 --- 10.0.0.1 ping statistics --- 00:17:42.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.293 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:42.293 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=258985 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 258985 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 258985 ']' 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.294 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.294 [2024-07-25 13:45:39.092459] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
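The block above is nvmftestinit/nvmf_tcp_init building the loopback test topology: the target-side port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24; the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24; TCP port 4420 is opened in iptables; and both directions are ping-verified before the target app starts. Condensed from the commands in the trace (run as root):

ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator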
00:17:42.294 [2024-07-25 13:45:39.092506] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.294 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.294 [2024-07-25 13:45:39.134665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:42.294 [2024-07-25 13:45:39.165471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:42.553 [2024-07-25 13:45:39.205415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.553 [2024-07-25 13:45:39.205458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.553 [2024-07-25 13:45:39.205467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.553 [2024-07-25 13:45:39.205476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.553 [2024-07-25 13:45:39.205483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.553 [2024-07-25 13:45:39.205542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.553 [2024-07-25 13:45:39.205627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.553 [2024-07-25 13:45:39.205629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.122 [2024-07-25 13:45:39.944269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.122 [2024-07-25 13:45:39.980900] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.122 NULL1 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=259256 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:43.122 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:43.122 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:43.122 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.122 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.382 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.642 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.642 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:43.642 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.642 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.642 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.901 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.901 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:43.901 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.901 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.901 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.470 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.470 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:44.470 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.470 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.470 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.728 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.728 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:44.728 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.728 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.728 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.987 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.987 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:44.987 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.987 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.987 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.247 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.247 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:45.247 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.247 
13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.247 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.506 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.506 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:45.506 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.506 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.506 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.075 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.075 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:46.075 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.075 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.075 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.334 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.334 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:46.334 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.334 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.334 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.594 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.594 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:46.594 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.594 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.594 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.853 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.853 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:46.853 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.853 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.853 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.421 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.422 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:47.422 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.422 13:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.422 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.681 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.681 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:47.681 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.681 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.681 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.940 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.940 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:47.940 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.940 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.940 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.199 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.199 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:48.199 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.199 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.199 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.458 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.458 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:48.458 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.458 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.458 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.025 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.025 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:49.025 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.025 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.025 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.284 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.284 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:49.284 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.284 13:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.284 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.543 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.543 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:49.543 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.543 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.543 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.801 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.801 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:49.802 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.802 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.802 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.060 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.060 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:50.060 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.060 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.060 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.627 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.627 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:50.627 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.627 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.627 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.886 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.886 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:50.886 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.886 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.886 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.145 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.145 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:51.145 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.145 13:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.145 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.404 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.405 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:51.405 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.405 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.405 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.973 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.973 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:51.973 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.973 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.973 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.232 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.232 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:52.232 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.232 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.232 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.491 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.491 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:52.491 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.492 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.492 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.750 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.750 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:52.750 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.750 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.750 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.009 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.009 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:53.009 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.009 13:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.009 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.611 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 259256 00:17:53.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (259256) - No such process 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 259256 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.611 rmmod nvme_tcp 00:17:53.611 rmmod nvme_fabrics 00:17:53.611 rmmod nvme_keyring 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 258985 ']' 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 258985 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 258985 ']' 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 258985 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 258985 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:53.611 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 258985' 00:17:53.611 killing process with pid 258985 00:17:53.612 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 258985 00:17:53.612 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 258985 00:17:53.870 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.871 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.776 00:17:55.776 real 0m20.445s 00:17:55.776 user 0m40.848s 00:17:55.776 sys 0m10.066s 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.776 ************************************ 00:17:55.776 END TEST nvmf_connect_stress 00:17:55.776 ************************************ 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.776 ************************************ 00:17:55.776 START TEST nvmf_fused_ordering 00:17:55.776 ************************************ 00:17:55.776 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:56.036 * Looking for test storage... 
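That closes the connect_stress run: nvmf_tgt was launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0xE), RPCs created a TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem with a 10.0.0.2:4420 listener, and a null bdev (NULL1, 1000 MB, 512-byte blocks); the connect_stress binary then ran for 10 seconds (-t 10) while the harness confirmed it was still alive with `kill -0 $PERF_PID` between RPC batches. Once it exited, kill reported "No such process", and the script removed rpc.txt, cleared the trap, and ran nvmftestfini, which unloaded nvme-tcp/nvme-fabrics and killed the target (pid 258985). A sketch of that liveness-polling pattern; some_stressor is a hypothetical stand-in for connect_stress:

# kill -0 delivers no signal; it only tests whether the PID still exists.
some_stressor &                     # hypothetical stand-in for connect_stress -t 10
pid=$!
while kill -0 "$pid" 2>/dev/null; do
  echo "PID $pid still running; issue another RPC batch here"
  sleep 1
done
wait "$pid"                         # reap it; the log's analogue of '@38 wait 259256'
echo "PID $pid exited"              # the trace shows 'kill: (259256) - No such process'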
00:17:56.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:56.036 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.037 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.607 13:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:02.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:02.607 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
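The fused_ordering run now repeats the same nvmftestinit sequence, rediscovering the two E810 ports. The `[[ up == up ]]` checks at nvmf/common.sh@390 trace a comparison of each candidate interface's reported state against "up"; assuming that state is the netdev's sysfs operstate, the check reduces to something like:

# Assumption: the harness's "up == up" test reflects the netdev's operstate.
for net in /sys/class/net/*; do
  state=$(<"$net/operstate")                # "up", "down", "unknown", ...
  [[ $state == up ]] && echo "${net##*/} is up"
done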
00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:02.607 Found net devices under 0000:af:00.0: cvl_0_0 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:02.607 Found net devices under 0000:af:00.1: cvl_0_1 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.607 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.608 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:18:02.867 00:18:02.867 --- 10.0.0.2 ping statistics --- 00:18:02.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.867 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:18:02.867 00:18:02.867 --- 10.0.0.1 ping statistics --- 00:18:02.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.867 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=264558 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 264558 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 264558 ']' 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.867 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.867 [2024-07-25 13:45:59.631094] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:18:02.867 [2024-07-25 13:45:59.631140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:02.867 EAL: No free 2048 kB hugepages reported on node 1
00:18:02.867 [2024-07-25 13:45:59.670896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:18:02.867 [2024-07-25 13:45:59.705139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:02.867 [2024-07-25 13:45:59.743594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:02.867 [2024-07-25 13:45:59.743635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:02.867 [2024-07-25 13:45:59.743647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:02.867 [2024-07-25 13:45:59.743656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:02.867 [2024-07-25 13:45:59.743662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:02.867 [2024-07-25 13:45:59.743684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.805 [2024-07-25 13:46:00.480249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.805 [2024-07-25 13:46:00.500419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.805 NULL1
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.805 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:03.806 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.806 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:18:03.806 [2024-07-25 13:46:00.555736] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:18:03.806 [2024-07-25 13:46:00.555774] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264832 ]
00:18:03.806 EAL: No free 2048 kB hugepages reported on node 1
00:18:03.806 [2024-07-25 13:46:00.594788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
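With the transport, subsystem, listener and backing namespace now in place, the fused_ordering tool has just been pointed at the listener. Restated outside the harness, the whole bed comes down to the commands below; interface names, addresses and RPC arguments are all taken from the trace, rpc.py stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and nvmf_tgt is assumed to already be running inside the namespace:

  # Split the back-to-back E810 pair: target port inside a netns, initiator port outside,
  # so 10.0.0.1 -> 10.0.0.2 traffic crosses a real NIC path rather than the local stack.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Provision the target: the same RPCs traced from fused_ordering.sh above.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512-byte blocks: the "size: 1GB" namespace below
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Drive the fused command pairs from the initiator side.
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) counters that follow appear to be the tool stepping through its 1024 fused submissions; in NVMe a fused pair is a Compare followed by a Write, and the point of the test is that the target must keep each pair together and in order.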
00:18:04.374 Attached to nqn.2016-06.io.spdk:cnode1 00:18:04.374 Namespace ID: 1 size: 1GB 00:18:04.374 fused_ordering(0) 00:18:04.374 fused_ordering(1) 00:18:04.374 fused_ordering(2) 00:18:04.374 fused_ordering(3) 00:18:04.374 fused_ordering(4) 00:18:04.374 fused_ordering(5) 00:18:04.375 fused_ordering(6) 00:18:04.375 fused_ordering(7) 00:18:04.375 fused_ordering(8) 00:18:04.375 fused_ordering(9) 00:18:04.375 fused_ordering(10) 00:18:04.375 fused_ordering(11) 00:18:04.375 fused_ordering(12) 00:18:04.375 fused_ordering(13) 00:18:04.375 fused_ordering(14) 00:18:04.375 fused_ordering(15) 00:18:04.375 fused_ordering(16) 00:18:04.375 fused_ordering(17) 00:18:04.375 fused_ordering(18) 00:18:04.375 fused_ordering(19) 00:18:04.375 fused_ordering(20) 00:18:04.375 fused_ordering(21) 00:18:04.375 fused_ordering(22) 00:18:04.375 fused_ordering(23) 00:18:04.375 fused_ordering(24) 00:18:04.375 fused_ordering(25) 00:18:04.375 fused_ordering(26) 00:18:04.375 fused_ordering(27) 00:18:04.375 fused_ordering(28) 00:18:04.375 fused_ordering(29) 00:18:04.375 fused_ordering(30) 00:18:04.375 fused_ordering(31) 00:18:04.375 fused_ordering(32) 00:18:04.375 fused_ordering(33) 00:18:04.375 fused_ordering(34) 00:18:04.375 fused_ordering(35) 00:18:04.375 fused_ordering(36) 00:18:04.375 fused_ordering(37) 00:18:04.375 fused_ordering(38) 00:18:04.375 fused_ordering(39) 00:18:04.375 fused_ordering(40) 00:18:04.375 fused_ordering(41) 00:18:04.375 fused_ordering(42) 00:18:04.375 fused_ordering(43) 00:18:04.375 fused_ordering(44) 00:18:04.375 fused_ordering(45) 00:18:04.375 fused_ordering(46) 00:18:04.375 fused_ordering(47) 00:18:04.375 fused_ordering(48) 00:18:04.375 fused_ordering(49) 00:18:04.375 fused_ordering(50) 00:18:04.375 fused_ordering(51) 00:18:04.375 fused_ordering(52) 00:18:04.375 fused_ordering(53) 00:18:04.375 fused_ordering(54) 00:18:04.375 fused_ordering(55) 00:18:04.375 fused_ordering(56) 00:18:04.375 fused_ordering(57) 00:18:04.375 fused_ordering(58) 00:18:04.375 fused_ordering(59) 00:18:04.375 fused_ordering(60) 00:18:04.375 fused_ordering(61) 00:18:04.375 fused_ordering(62) 00:18:04.375 fused_ordering(63) 00:18:04.375 fused_ordering(64) 00:18:04.375 fused_ordering(65) 00:18:04.375 fused_ordering(66) 00:18:04.375 fused_ordering(67) 00:18:04.375 fused_ordering(68) 00:18:04.375 fused_ordering(69) 00:18:04.375 fused_ordering(70) 00:18:04.375 fused_ordering(71) 00:18:04.375 fused_ordering(72) 00:18:04.375 fused_ordering(73) 00:18:04.375 fused_ordering(74) 00:18:04.375 fused_ordering(75) 00:18:04.375 fused_ordering(76) 00:18:04.375 fused_ordering(77) 00:18:04.375 fused_ordering(78) 00:18:04.375 fused_ordering(79) 00:18:04.375 fused_ordering(80) 00:18:04.375 fused_ordering(81) 00:18:04.375 fused_ordering(82) 00:18:04.375 fused_ordering(83) 00:18:04.375 fused_ordering(84) 00:18:04.375 fused_ordering(85) 00:18:04.375 fused_ordering(86) 00:18:04.375 fused_ordering(87) 00:18:04.375 fused_ordering(88) 00:18:04.375 fused_ordering(89) 00:18:04.375 fused_ordering(90) 00:18:04.375 fused_ordering(91) 00:18:04.375 fused_ordering(92) 00:18:04.375 fused_ordering(93) 00:18:04.375 fused_ordering(94) 00:18:04.375 fused_ordering(95) 00:18:04.375 fused_ordering(96) 00:18:04.375 fused_ordering(97) 00:18:04.375 fused_ordering(98) 00:18:04.375 fused_ordering(99) 00:18:04.375 fused_ordering(100) 00:18:04.375 fused_ordering(101) 00:18:04.375 fused_ordering(102) 00:18:04.375 fused_ordering(103) 00:18:04.375 fused_ordering(104) 00:18:04.375 fused_ordering(105) 00:18:04.375 fused_ordering(106) 00:18:04.375 fused_ordering(107) 
00:18:04.375 fused_ordering(108) 00:18:04.375 fused_ordering(109) 00:18:04.375 fused_ordering(110) 00:18:04.375 fused_ordering(111) 00:18:04.375 fused_ordering(112) 00:18:04.375 fused_ordering(113) 00:18:04.375 fused_ordering(114) 00:18:04.375 fused_ordering(115) 00:18:04.375 fused_ordering(116) 00:18:04.375 fused_ordering(117) 00:18:04.375 fused_ordering(118) 00:18:04.375 fused_ordering(119) 00:18:04.375 fused_ordering(120) 00:18:04.375 fused_ordering(121) 00:18:04.375 fused_ordering(122) 00:18:04.375 fused_ordering(123) 00:18:04.375 fused_ordering(124) 00:18:04.375 fused_ordering(125) 00:18:04.375 fused_ordering(126) 00:18:04.375 fused_ordering(127) 00:18:04.375 fused_ordering(128) 00:18:04.375 fused_ordering(129) 00:18:04.375 fused_ordering(130) 00:18:04.375 fused_ordering(131) 00:18:04.375 fused_ordering(132) 00:18:04.375 fused_ordering(133) 00:18:04.375 fused_ordering(134) 00:18:04.375 fused_ordering(135) 00:18:04.375 fused_ordering(136) 00:18:04.375 fused_ordering(137) 00:18:04.375 fused_ordering(138) 00:18:04.375 fused_ordering(139) 00:18:04.375 fused_ordering(140) 00:18:04.375 fused_ordering(141) 00:18:04.375 fused_ordering(142) 00:18:04.375 fused_ordering(143) 00:18:04.375 fused_ordering(144) 00:18:04.375 fused_ordering(145) 00:18:04.375 fused_ordering(146) 00:18:04.375 fused_ordering(147) 00:18:04.375 fused_ordering(148) 00:18:04.375 fused_ordering(149) 00:18:04.375 fused_ordering(150) 00:18:04.375 fused_ordering(151) 00:18:04.375 fused_ordering(152) 00:18:04.375 fused_ordering(153) 00:18:04.375 fused_ordering(154) 00:18:04.375 fused_ordering(155) 00:18:04.375 fused_ordering(156) 00:18:04.375 fused_ordering(157) 00:18:04.375 fused_ordering(158) 00:18:04.375 fused_ordering(159) 00:18:04.375 fused_ordering(160) 00:18:04.375 fused_ordering(161) 00:18:04.375 fused_ordering(162) 00:18:04.375 fused_ordering(163) 00:18:04.375 fused_ordering(164) 00:18:04.375 fused_ordering(165) 00:18:04.375 fused_ordering(166) 00:18:04.375 fused_ordering(167) 00:18:04.375 fused_ordering(168) 00:18:04.375 fused_ordering(169) 00:18:04.375 fused_ordering(170) 00:18:04.375 fused_ordering(171) 00:18:04.375 fused_ordering(172) 00:18:04.375 fused_ordering(173) 00:18:04.375 fused_ordering(174) 00:18:04.375 fused_ordering(175) 00:18:04.375 fused_ordering(176) 00:18:04.375 fused_ordering(177) 00:18:04.375 fused_ordering(178) 00:18:04.375 fused_ordering(179) 00:18:04.375 fused_ordering(180) 00:18:04.375 fused_ordering(181) 00:18:04.375 fused_ordering(182) 00:18:04.375 fused_ordering(183) 00:18:04.375 fused_ordering(184) 00:18:04.375 fused_ordering(185) 00:18:04.375 fused_ordering(186) 00:18:04.375 fused_ordering(187) 00:18:04.375 fused_ordering(188) 00:18:04.375 fused_ordering(189) 00:18:04.375 fused_ordering(190) 00:18:04.375 fused_ordering(191) 00:18:04.375 fused_ordering(192) 00:18:04.375 fused_ordering(193) 00:18:04.375 fused_ordering(194) 00:18:04.375 fused_ordering(195) 00:18:04.375 fused_ordering(196) 00:18:04.375 fused_ordering(197) 00:18:04.375 fused_ordering(198) 00:18:04.375 fused_ordering(199) 00:18:04.375 fused_ordering(200) 00:18:04.375 fused_ordering(201) 00:18:04.375 fused_ordering(202) 00:18:04.375 fused_ordering(203) 00:18:04.375 fused_ordering(204) 00:18:04.375 fused_ordering(205) 00:18:04.635 fused_ordering(206) 00:18:04.635 fused_ordering(207) 00:18:04.635 fused_ordering(208) 00:18:04.635 fused_ordering(209) 00:18:04.635 fused_ordering(210) 00:18:04.635 fused_ordering(211) 00:18:04.635 fused_ordering(212) 00:18:04.635 fused_ordering(213) 00:18:04.635 fused_ordering(214) 00:18:04.635 
fused_ordering(215) 00:18:04.635 fused_ordering(216) 00:18:04.635 fused_ordering(217) 00:18:04.635 fused_ordering(218) 00:18:04.635 fused_ordering(219) 00:18:04.635 fused_ordering(220) 00:18:04.635 fused_ordering(221) 00:18:04.635 fused_ordering(222) 00:18:04.635 fused_ordering(223) 00:18:04.635 fused_ordering(224) 00:18:04.635 fused_ordering(225) 00:18:04.635 fused_ordering(226) 00:18:04.635 fused_ordering(227) 00:18:04.635 fused_ordering(228) 00:18:04.635 fused_ordering(229) 00:18:04.635 fused_ordering(230) 00:18:04.635 fused_ordering(231) 00:18:04.635 fused_ordering(232) 00:18:04.635 fused_ordering(233) 00:18:04.635 fused_ordering(234) 00:18:04.635 fused_ordering(235) 00:18:04.635 fused_ordering(236) 00:18:04.635 fused_ordering(237) 00:18:04.635 fused_ordering(238) 00:18:04.635 fused_ordering(239) 00:18:04.635 fused_ordering(240) 00:18:04.635 fused_ordering(241) 00:18:04.635 fused_ordering(242) 00:18:04.635 fused_ordering(243) 00:18:04.635 fused_ordering(244) 00:18:04.635 fused_ordering(245) 00:18:04.635 fused_ordering(246) 00:18:04.635 fused_ordering(247) 00:18:04.635 fused_ordering(248) 00:18:04.635 fused_ordering(249) 00:18:04.635 fused_ordering(250) 00:18:04.635 fused_ordering(251) 00:18:04.635 fused_ordering(252) 00:18:04.635 fused_ordering(253) 00:18:04.635 fused_ordering(254) 00:18:04.635 fused_ordering(255) 00:18:04.635 fused_ordering(256) 00:18:04.635 fused_ordering(257) 00:18:04.635 fused_ordering(258) 00:18:04.635 fused_ordering(259) 00:18:04.635 fused_ordering(260) 00:18:04.635 fused_ordering(261) 00:18:04.635 fused_ordering(262) 00:18:04.635 fused_ordering(263) 00:18:04.635 fused_ordering(264) 00:18:04.635 fused_ordering(265) 00:18:04.635 fused_ordering(266) 00:18:04.635 fused_ordering(267) 00:18:04.635 fused_ordering(268) 00:18:04.635 fused_ordering(269) 00:18:04.635 fused_ordering(270) 00:18:04.635 fused_ordering(271) 00:18:04.635 fused_ordering(272) 00:18:04.635 fused_ordering(273) 00:18:04.635 fused_ordering(274) 00:18:04.635 fused_ordering(275) 00:18:04.635 fused_ordering(276) 00:18:04.635 fused_ordering(277) 00:18:04.635 fused_ordering(278) 00:18:04.635 fused_ordering(279) 00:18:04.635 fused_ordering(280) 00:18:04.635 fused_ordering(281) 00:18:04.635 fused_ordering(282) 00:18:04.635 fused_ordering(283) 00:18:04.635 fused_ordering(284) 00:18:04.635 fused_ordering(285) 00:18:04.635 fused_ordering(286) 00:18:04.635 fused_ordering(287) 00:18:04.635 fused_ordering(288) 00:18:04.635 fused_ordering(289) 00:18:04.635 fused_ordering(290) 00:18:04.635 fused_ordering(291) 00:18:04.635 fused_ordering(292) 00:18:04.635 fused_ordering(293) 00:18:04.635 fused_ordering(294) 00:18:04.635 fused_ordering(295) 00:18:04.635 fused_ordering(296) 00:18:04.635 fused_ordering(297) 00:18:04.635 fused_ordering(298) 00:18:04.635 fused_ordering(299) 00:18:04.635 fused_ordering(300) 00:18:04.635 fused_ordering(301) 00:18:04.635 fused_ordering(302) 00:18:04.635 fused_ordering(303) 00:18:04.635 fused_ordering(304) 00:18:04.635 fused_ordering(305) 00:18:04.635 fused_ordering(306) 00:18:04.635 fused_ordering(307) 00:18:04.635 fused_ordering(308) 00:18:04.635 fused_ordering(309) 00:18:04.635 fused_ordering(310) 00:18:04.635 fused_ordering(311) 00:18:04.635 fused_ordering(312) 00:18:04.635 fused_ordering(313) 00:18:04.635 fused_ordering(314) 00:18:04.635 fused_ordering(315) 00:18:04.635 fused_ordering(316) 00:18:04.635 fused_ordering(317) 00:18:04.635 fused_ordering(318) 00:18:04.635 fused_ordering(319) 00:18:04.635 fused_ordering(320) 00:18:04.635 fused_ordering(321) 00:18:04.635 fused_ordering(322) 
00:18:04.635 fused_ordering(323) 00:18:04.635 fused_ordering(324) 00:18:04.635 fused_ordering(325) 00:18:04.635 fused_ordering(326) 00:18:04.635 fused_ordering(327) 00:18:04.635 fused_ordering(328) 00:18:04.635 fused_ordering(329) 00:18:04.635 fused_ordering(330) 00:18:04.635 fused_ordering(331) 00:18:04.635 fused_ordering(332) 00:18:04.635 fused_ordering(333) 00:18:04.635 fused_ordering(334) 00:18:04.635 fused_ordering(335) 00:18:04.635 fused_ordering(336) 00:18:04.635 fused_ordering(337) 00:18:04.635 fused_ordering(338) 00:18:04.635 fused_ordering(339) 00:18:04.635 fused_ordering(340) 00:18:04.635 fused_ordering(341) 00:18:04.635 fused_ordering(342) 00:18:04.635 fused_ordering(343) 00:18:04.635 fused_ordering(344) 00:18:04.635 fused_ordering(345) 00:18:04.635 fused_ordering(346) 00:18:04.635 fused_ordering(347) 00:18:04.635 fused_ordering(348) 00:18:04.635 fused_ordering(349) 00:18:04.635 fused_ordering(350) 00:18:04.635 fused_ordering(351) 00:18:04.635 fused_ordering(352) 00:18:04.635 fused_ordering(353) 00:18:04.635 fused_ordering(354) 00:18:04.635 fused_ordering(355) 00:18:04.635 fused_ordering(356) 00:18:04.635 fused_ordering(357) 00:18:04.635 fused_ordering(358) 00:18:04.635 fused_ordering(359) 00:18:04.635 fused_ordering(360) 00:18:04.635 fused_ordering(361) 00:18:04.635 fused_ordering(362) 00:18:04.635 fused_ordering(363) 00:18:04.635 fused_ordering(364) 00:18:04.635 fused_ordering(365) 00:18:04.635 fused_ordering(366) 00:18:04.635 fused_ordering(367) 00:18:04.635 fused_ordering(368) 00:18:04.635 fused_ordering(369) 00:18:04.635 fused_ordering(370) 00:18:04.635 fused_ordering(371) 00:18:04.635 fused_ordering(372) 00:18:04.635 fused_ordering(373) 00:18:04.635 fused_ordering(374) 00:18:04.635 fused_ordering(375) 00:18:04.635 fused_ordering(376) 00:18:04.635 fused_ordering(377) 00:18:04.635 fused_ordering(378) 00:18:04.635 fused_ordering(379) 00:18:04.635 fused_ordering(380) 00:18:04.635 fused_ordering(381) 00:18:04.635 fused_ordering(382) 00:18:04.635 fused_ordering(383) 00:18:04.635 fused_ordering(384) 00:18:04.635 fused_ordering(385) 00:18:04.635 fused_ordering(386) 00:18:04.635 fused_ordering(387) 00:18:04.635 fused_ordering(388) 00:18:04.635 fused_ordering(389) 00:18:04.635 fused_ordering(390) 00:18:04.635 fused_ordering(391) 00:18:04.635 fused_ordering(392) 00:18:04.635 fused_ordering(393) 00:18:04.635 fused_ordering(394) 00:18:04.635 fused_ordering(395) 00:18:04.635 fused_ordering(396) 00:18:04.635 fused_ordering(397) 00:18:04.635 fused_ordering(398) 00:18:04.635 fused_ordering(399) 00:18:04.635 fused_ordering(400) 00:18:04.635 fused_ordering(401) 00:18:04.635 fused_ordering(402) 00:18:04.635 fused_ordering(403) 00:18:04.635 fused_ordering(404) 00:18:04.635 fused_ordering(405) 00:18:04.635 fused_ordering(406) 00:18:04.635 fused_ordering(407) 00:18:04.635 fused_ordering(408) 00:18:04.635 fused_ordering(409) 00:18:04.635 fused_ordering(410) 00:18:05.204 fused_ordering(411) 00:18:05.204 fused_ordering(412) 00:18:05.204 fused_ordering(413) 00:18:05.204 fused_ordering(414) 00:18:05.204 fused_ordering(415) 00:18:05.204 fused_ordering(416) 00:18:05.204 fused_ordering(417) 00:18:05.204 fused_ordering(418) 00:18:05.204 fused_ordering(419) 00:18:05.204 fused_ordering(420) 00:18:05.204 fused_ordering(421) 00:18:05.204 fused_ordering(422) 00:18:05.204 fused_ordering(423) 00:18:05.204 fused_ordering(424) 00:18:05.204 fused_ordering(425) 00:18:05.204 fused_ordering(426) 00:18:05.204 fused_ordering(427) 00:18:05.204 fused_ordering(428) 00:18:05.204 fused_ordering(429) 00:18:05.204 
fused_ordering(430) 00:18:05.204 fused_ordering(431) 00:18:05.204 fused_ordering(432) 00:18:05.204 fused_ordering(433) 00:18:05.204 fused_ordering(434) 00:18:05.204 fused_ordering(435) 00:18:05.204 fused_ordering(436) 00:18:05.204 fused_ordering(437) 00:18:05.204 fused_ordering(438) 00:18:05.204 fused_ordering(439) 00:18:05.204 fused_ordering(440) 00:18:05.204 fused_ordering(441) 00:18:05.204 fused_ordering(442) 00:18:05.204 fused_ordering(443) 00:18:05.204 fused_ordering(444) 00:18:05.204 fused_ordering(445) 00:18:05.204 fused_ordering(446) 00:18:05.204 fused_ordering(447) 00:18:05.204 fused_ordering(448) 00:18:05.204 fused_ordering(449) 00:18:05.204 fused_ordering(450) 00:18:05.204 fused_ordering(451) 00:18:05.204 fused_ordering(452) 00:18:05.204 fused_ordering(453) 00:18:05.204 fused_ordering(454) 00:18:05.204 fused_ordering(455) 00:18:05.204 fused_ordering(456) 00:18:05.204 fused_ordering(457) 00:18:05.204 fused_ordering(458) 00:18:05.204 fused_ordering(459) 00:18:05.204 fused_ordering(460) 00:18:05.204 fused_ordering(461) 00:18:05.204 fused_ordering(462) 00:18:05.204 fused_ordering(463) 00:18:05.204 fused_ordering(464) 00:18:05.204 fused_ordering(465) 00:18:05.204 fused_ordering(466) 00:18:05.204 fused_ordering(467) 00:18:05.204 fused_ordering(468) 00:18:05.204 fused_ordering(469) 00:18:05.204 fused_ordering(470) 00:18:05.204 fused_ordering(471) 00:18:05.204 fused_ordering(472) 00:18:05.204 fused_ordering(473) 00:18:05.204 fused_ordering(474) 00:18:05.204 fused_ordering(475) 00:18:05.204 fused_ordering(476) 00:18:05.204 fused_ordering(477) 00:18:05.204 fused_ordering(478) 00:18:05.204 fused_ordering(479) 00:18:05.204 fused_ordering(480) 00:18:05.204 fused_ordering(481) 00:18:05.204 fused_ordering(482) 00:18:05.204 fused_ordering(483) 00:18:05.204 fused_ordering(484) 00:18:05.204 fused_ordering(485) 00:18:05.204 fused_ordering(486) 00:18:05.204 fused_ordering(487) 00:18:05.204 fused_ordering(488) 00:18:05.204 fused_ordering(489) 00:18:05.204 fused_ordering(490) 00:18:05.204 fused_ordering(491) 00:18:05.204 fused_ordering(492) 00:18:05.204 fused_ordering(493) 00:18:05.204 fused_ordering(494) 00:18:05.204 fused_ordering(495) 00:18:05.204 fused_ordering(496) 00:18:05.204 fused_ordering(497) 00:18:05.204 fused_ordering(498) 00:18:05.204 fused_ordering(499) 00:18:05.204 fused_ordering(500) 00:18:05.204 fused_ordering(501) 00:18:05.204 fused_ordering(502) 00:18:05.204 fused_ordering(503) 00:18:05.204 fused_ordering(504) 00:18:05.204 fused_ordering(505) 00:18:05.204 fused_ordering(506) 00:18:05.204 fused_ordering(507) 00:18:05.204 fused_ordering(508) 00:18:05.205 fused_ordering(509) 00:18:05.205 fused_ordering(510) 00:18:05.205 fused_ordering(511) 00:18:05.205 fused_ordering(512) 00:18:05.205 fused_ordering(513) 00:18:05.205 fused_ordering(514) 00:18:05.205 fused_ordering(515) 00:18:05.205 fused_ordering(516) 00:18:05.205 fused_ordering(517) 00:18:05.205 fused_ordering(518) 00:18:05.205 fused_ordering(519) 00:18:05.205 fused_ordering(520) 00:18:05.205 fused_ordering(521) 00:18:05.205 fused_ordering(522) 00:18:05.205 fused_ordering(523) 00:18:05.205 fused_ordering(524) 00:18:05.205 fused_ordering(525) 00:18:05.205 fused_ordering(526) 00:18:05.205 fused_ordering(527) 00:18:05.205 fused_ordering(528) 00:18:05.205 fused_ordering(529) 00:18:05.205 fused_ordering(530) 00:18:05.205 fused_ordering(531) 00:18:05.205 fused_ordering(532) 00:18:05.205 fused_ordering(533) 00:18:05.205 fused_ordering(534) 00:18:05.205 fused_ordering(535) 00:18:05.205 fused_ordering(536) 00:18:05.205 fused_ordering(537) 
00:18:05.205 fused_ordering(538) 00:18:05.205 fused_ordering(539) 00:18:05.205 fused_ordering(540) 00:18:05.205 fused_ordering(541) 00:18:05.205 fused_ordering(542) 00:18:05.205 fused_ordering(543) 00:18:05.205 fused_ordering(544) 00:18:05.205 fused_ordering(545) 00:18:05.205 fused_ordering(546) 00:18:05.205 fused_ordering(547) 00:18:05.205 fused_ordering(548) 00:18:05.205 fused_ordering(549) 00:18:05.205 fused_ordering(550) 00:18:05.205 fused_ordering(551) 00:18:05.205 fused_ordering(552) 00:18:05.205 fused_ordering(553) 00:18:05.205 fused_ordering(554) 00:18:05.205 fused_ordering(555) 00:18:05.205 fused_ordering(556) 00:18:05.205 fused_ordering(557) 00:18:05.205 fused_ordering(558) 00:18:05.205 fused_ordering(559) 00:18:05.205 fused_ordering(560) 00:18:05.205 fused_ordering(561) 00:18:05.205 fused_ordering(562) 00:18:05.205 fused_ordering(563) 00:18:05.205 fused_ordering(564) 00:18:05.205 fused_ordering(565) 00:18:05.205 fused_ordering(566) 00:18:05.205 fused_ordering(567) 00:18:05.205 fused_ordering(568) 00:18:05.205 fused_ordering(569) 00:18:05.205 fused_ordering(570) 00:18:05.205 fused_ordering(571) 00:18:05.205 fused_ordering(572) 00:18:05.205 fused_ordering(573) 00:18:05.205 fused_ordering(574) 00:18:05.205 fused_ordering(575) 00:18:05.205 fused_ordering(576) 00:18:05.205 fused_ordering(577) 00:18:05.205 fused_ordering(578) 00:18:05.205 fused_ordering(579) 00:18:05.205 fused_ordering(580) 00:18:05.205 fused_ordering(581) 00:18:05.205 fused_ordering(582) 00:18:05.205 fused_ordering(583) 00:18:05.205 fused_ordering(584) 00:18:05.205 fused_ordering(585) 00:18:05.205 fused_ordering(586) 00:18:05.205 fused_ordering(587) 00:18:05.205 fused_ordering(588) 00:18:05.205 fused_ordering(589) 00:18:05.205 fused_ordering(590) 00:18:05.205 fused_ordering(591) 00:18:05.205 fused_ordering(592) 00:18:05.205 fused_ordering(593) 00:18:05.205 fused_ordering(594) 00:18:05.205 fused_ordering(595) 00:18:05.205 fused_ordering(596) 00:18:05.205 fused_ordering(597) 00:18:05.205 fused_ordering(598) 00:18:05.205 fused_ordering(599) 00:18:05.205 fused_ordering(600) 00:18:05.205 fused_ordering(601) 00:18:05.205 fused_ordering(602) 00:18:05.205 fused_ordering(603) 00:18:05.205 fused_ordering(604) 00:18:05.205 fused_ordering(605) 00:18:05.205 fused_ordering(606) 00:18:05.205 fused_ordering(607) 00:18:05.205 fused_ordering(608) 00:18:05.205 fused_ordering(609) 00:18:05.205 fused_ordering(610) 00:18:05.205 fused_ordering(611) 00:18:05.205 fused_ordering(612) 00:18:05.205 fused_ordering(613) 00:18:05.205 fused_ordering(614) 00:18:05.205 fused_ordering(615) 00:18:05.774 fused_ordering(616) 00:18:05.774 fused_ordering(617) 00:18:05.774 fused_ordering(618) 00:18:05.774 fused_ordering(619) 00:18:05.774 fused_ordering(620) 00:18:05.774 fused_ordering(621) 00:18:05.774 fused_ordering(622) 00:18:05.774 fused_ordering(623) 00:18:05.774 fused_ordering(624) 00:18:05.774 fused_ordering(625) 00:18:05.774 fused_ordering(626) 00:18:05.774 fused_ordering(627) 00:18:05.774 fused_ordering(628) 00:18:05.774 fused_ordering(629) 00:18:05.774 fused_ordering(630) 00:18:05.774 fused_ordering(631) 00:18:05.774 fused_ordering(632) 00:18:05.774 fused_ordering(633) 00:18:05.774 fused_ordering(634) 00:18:05.774 fused_ordering(635) 00:18:05.774 fused_ordering(636) 00:18:05.774 fused_ordering(637) 00:18:05.774 fused_ordering(638) 00:18:05.774 fused_ordering(639) 00:18:05.774 fused_ordering(640) 00:18:05.774 fused_ordering(641) 00:18:05.774 fused_ordering(642) 00:18:05.774 fused_ordering(643) 00:18:05.774 fused_ordering(644) 00:18:05.774 
fused_ordering(645) 00:18:05.774 fused_ordering(646) 00:18:05.774 fused_ordering(647) 00:18:05.774 fused_ordering(648) 00:18:05.774 fused_ordering(649) 00:18:05.774 fused_ordering(650) 00:18:05.774 fused_ordering(651) 00:18:05.774 fused_ordering(652) 00:18:05.774 fused_ordering(653) 00:18:05.774 fused_ordering(654) 00:18:05.774 fused_ordering(655) 00:18:05.774 fused_ordering(656) 00:18:05.774 fused_ordering(657) 00:18:05.774 fused_ordering(658) 00:18:05.774 fused_ordering(659) 00:18:05.774 fused_ordering(660) 00:18:05.774 fused_ordering(661) 00:18:05.774 fused_ordering(662) 00:18:05.774 fused_ordering(663) 00:18:05.774 fused_ordering(664) 00:18:05.774 fused_ordering(665) 00:18:05.774 fused_ordering(666) 00:18:05.774 fused_ordering(667) 00:18:05.774 fused_ordering(668) 00:18:05.774 fused_ordering(669) 00:18:05.774 fused_ordering(670) 00:18:05.774 fused_ordering(671) 00:18:05.774 fused_ordering(672) 00:18:05.774 fused_ordering(673) 00:18:05.774 fused_ordering(674) 00:18:05.774 fused_ordering(675) 00:18:05.775 fused_ordering(676) 00:18:05.775 fused_ordering(677) 00:18:05.775 fused_ordering(678) 00:18:05.775 fused_ordering(679) 00:18:05.775 fused_ordering(680) 00:18:05.775 fused_ordering(681) 00:18:05.775 fused_ordering(682) 00:18:05.775 fused_ordering(683) 00:18:05.775 fused_ordering(684) 00:18:05.775 fused_ordering(685) 00:18:05.775 fused_ordering(686) 00:18:05.775 fused_ordering(687) 00:18:05.775 fused_ordering(688) 00:18:05.775 fused_ordering(689) 00:18:05.775 fused_ordering(690) 00:18:05.775 fused_ordering(691) 00:18:05.775 fused_ordering(692) 00:18:05.775 fused_ordering(693) 00:18:05.775 fused_ordering(694) 00:18:05.775 fused_ordering(695) 00:18:05.775 fused_ordering(696) 00:18:05.775 fused_ordering(697) 00:18:05.775 fused_ordering(698) 00:18:05.775 fused_ordering(699) 00:18:05.775 fused_ordering(700) 00:18:05.775 fused_ordering(701) 00:18:05.775 fused_ordering(702) 00:18:05.775 fused_ordering(703) 00:18:05.775 fused_ordering(704) 00:18:05.775 fused_ordering(705) 00:18:05.775 fused_ordering(706) 00:18:05.775 fused_ordering(707) 00:18:05.775 fused_ordering(708) 00:18:05.775 fused_ordering(709) 00:18:05.775 fused_ordering(710) 00:18:05.775 fused_ordering(711) 00:18:05.775 fused_ordering(712) 00:18:05.775 fused_ordering(713) 00:18:05.775 fused_ordering(714) 00:18:05.775 fused_ordering(715) 00:18:05.775 fused_ordering(716) 00:18:05.775 fused_ordering(717) 00:18:05.775 fused_ordering(718) 00:18:05.775 fused_ordering(719) 00:18:05.775 fused_ordering(720) 00:18:05.775 fused_ordering(721) 00:18:05.775 fused_ordering(722) 00:18:05.775 fused_ordering(723) 00:18:05.775 fused_ordering(724) 00:18:05.775 fused_ordering(725) 00:18:05.775 fused_ordering(726) 00:18:05.775 fused_ordering(727) 00:18:05.775 fused_ordering(728) 00:18:05.775 fused_ordering(729) 00:18:05.775 fused_ordering(730) 00:18:05.775 fused_ordering(731) 00:18:05.775 fused_ordering(732) 00:18:05.775 fused_ordering(733) 00:18:05.775 fused_ordering(734) 00:18:05.775 fused_ordering(735) 00:18:05.775 fused_ordering(736) 00:18:05.775 fused_ordering(737) 00:18:05.775 fused_ordering(738) 00:18:05.775 fused_ordering(739) 00:18:05.775 fused_ordering(740) 00:18:05.775 fused_ordering(741) 00:18:05.775 fused_ordering(742) 00:18:05.775 fused_ordering(743) 00:18:05.775 fused_ordering(744) 00:18:05.775 fused_ordering(745) 00:18:05.775 fused_ordering(746) 00:18:05.775 fused_ordering(747) 00:18:05.775 fused_ordering(748) 00:18:05.775 fused_ordering(749) 00:18:05.775 fused_ordering(750) 00:18:05.775 fused_ordering(751) 00:18:05.775 fused_ordering(752) 
00:18:05.775 fused_ordering(753) 00:18:05.775 fused_ordering(754) 00:18:05.775 fused_ordering(755) 00:18:05.775 fused_ordering(756) 00:18:05.775 fused_ordering(757) 00:18:05.775 fused_ordering(758) 00:18:05.775 fused_ordering(759) 00:18:05.775 fused_ordering(760) 00:18:05.775 fused_ordering(761) 00:18:05.775 fused_ordering(762) 00:18:05.775 fused_ordering(763) 00:18:05.775 fused_ordering(764) 00:18:05.775 fused_ordering(765) 00:18:05.775 fused_ordering(766) 00:18:05.775 fused_ordering(767) 00:18:05.775 fused_ordering(768) 00:18:05.775 fused_ordering(769) 00:18:05.775 fused_ordering(770) 00:18:05.775 fused_ordering(771) 00:18:05.775 fused_ordering(772) 00:18:05.775 fused_ordering(773) 00:18:05.775 fused_ordering(774) 00:18:05.775 fused_ordering(775) 00:18:05.775 fused_ordering(776) 00:18:05.775 fused_ordering(777) 00:18:05.775 fused_ordering(778) 00:18:05.775 fused_ordering(779) 00:18:05.775 fused_ordering(780) 00:18:05.775 fused_ordering(781) 00:18:05.775 fused_ordering(782) 00:18:05.775 fused_ordering(783) 00:18:05.775 fused_ordering(784) 00:18:05.775 fused_ordering(785) 00:18:05.775 fused_ordering(786) 00:18:05.775 fused_ordering(787) 00:18:05.775 fused_ordering(788) 00:18:05.775 fused_ordering(789) 00:18:05.775 fused_ordering(790) 00:18:05.775 fused_ordering(791) 00:18:05.775 fused_ordering(792) 00:18:05.775 fused_ordering(793) 00:18:05.775 fused_ordering(794) 00:18:05.775 fused_ordering(795) 00:18:05.775 fused_ordering(796) 00:18:05.775 fused_ordering(797) 00:18:05.775 fused_ordering(798) 00:18:05.775 fused_ordering(799) 00:18:05.775 fused_ordering(800) 00:18:05.775 fused_ordering(801) 00:18:05.775 fused_ordering(802) 00:18:05.775 fused_ordering(803) 00:18:05.775 fused_ordering(804) 00:18:05.775 fused_ordering(805) 00:18:05.775 fused_ordering(806) 00:18:05.775 fused_ordering(807) 00:18:05.775 fused_ordering(808) 00:18:05.775 fused_ordering(809) 00:18:05.775 fused_ordering(810) 00:18:05.775 fused_ordering(811) 00:18:05.775 fused_ordering(812) 00:18:05.775 fused_ordering(813) 00:18:05.775 fused_ordering(814) 00:18:05.775 fused_ordering(815) 00:18:05.775 fused_ordering(816) 00:18:05.775 fused_ordering(817) 00:18:05.775 fused_ordering(818) 00:18:05.775 fused_ordering(819) 00:18:05.775 fused_ordering(820) 00:18:06.344 fused_ordering(821) 00:18:06.344 fused_ordering(822) 00:18:06.344 fused_ordering(823) 00:18:06.344 fused_ordering(824) 00:18:06.344 fused_ordering(825) 00:18:06.344 fused_ordering(826) 00:18:06.344 fused_ordering(827) 00:18:06.344 fused_ordering(828) 00:18:06.344 fused_ordering(829) 00:18:06.344 fused_ordering(830) 00:18:06.344 fused_ordering(831) 00:18:06.344 fused_ordering(832) 00:18:06.344 fused_ordering(833) 00:18:06.344 fused_ordering(834) 00:18:06.344 fused_ordering(835) 00:18:06.344 fused_ordering(836) 00:18:06.344 fused_ordering(837) 00:18:06.344 fused_ordering(838) 00:18:06.344 fused_ordering(839) 00:18:06.344 fused_ordering(840) 00:18:06.344 fused_ordering(841) 00:18:06.344 fused_ordering(842) 00:18:06.344 fused_ordering(843) 00:18:06.344 fused_ordering(844) 00:18:06.344 fused_ordering(845) 00:18:06.344 fused_ordering(846) 00:18:06.344 fused_ordering(847) 00:18:06.344 fused_ordering(848) 00:18:06.344 fused_ordering(849) 00:18:06.344 fused_ordering(850) 00:18:06.344 fused_ordering(851) 00:18:06.344 fused_ordering(852) 00:18:06.344 fused_ordering(853) 00:18:06.344 fused_ordering(854) 00:18:06.344 fused_ordering(855) 00:18:06.344 fused_ordering(856) 00:18:06.344 fused_ordering(857) 00:18:06.344 fused_ordering(858) 00:18:06.344 fused_ordering(859) 00:18:06.344 
fused_ordering(860) 00:18:06.344 fused_ordering(861) 00:18:06.344 fused_ordering(862) 00:18:06.344 fused_ordering(863) 00:18:06.344 fused_ordering(864) 00:18:06.344 fused_ordering(865) 00:18:06.344 fused_ordering(866) 00:18:06.344 fused_ordering(867) 00:18:06.344 fused_ordering(868) 00:18:06.344 fused_ordering(869) 00:18:06.344 fused_ordering(870) 00:18:06.344 fused_ordering(871) 00:18:06.344 fused_ordering(872) 00:18:06.344 fused_ordering(873) 00:18:06.344 fused_ordering(874) 00:18:06.344 fused_ordering(875) 00:18:06.344 fused_ordering(876) 00:18:06.344 fused_ordering(877) 00:18:06.344 fused_ordering(878) 00:18:06.344 fused_ordering(879) 00:18:06.344 fused_ordering(880) 00:18:06.344 fused_ordering(881) 00:18:06.344 fused_ordering(882) 00:18:06.344 fused_ordering(883) 00:18:06.344 fused_ordering(884) 00:18:06.344 fused_ordering(885) 00:18:06.344 fused_ordering(886) 00:18:06.344 fused_ordering(887) 00:18:06.344 fused_ordering(888) 00:18:06.344 fused_ordering(889) 00:18:06.344 fused_ordering(890) 00:18:06.344 fused_ordering(891) 00:18:06.344 fused_ordering(892) 00:18:06.344 fused_ordering(893) 00:18:06.344 fused_ordering(894) 00:18:06.344 fused_ordering(895) 00:18:06.344 fused_ordering(896) 00:18:06.344 fused_ordering(897) 00:18:06.344 fused_ordering(898) 00:18:06.344 fused_ordering(899) 00:18:06.344 fused_ordering(900) 00:18:06.344 fused_ordering(901) 00:18:06.344 fused_ordering(902) 00:18:06.344 fused_ordering(903) 00:18:06.344 fused_ordering(904) 00:18:06.344 fused_ordering(905) 00:18:06.344 fused_ordering(906) 00:18:06.344 fused_ordering(907) 00:18:06.344 fused_ordering(908) 00:18:06.344 fused_ordering(909) 00:18:06.345 fused_ordering(910) 00:18:06.345 fused_ordering(911) 00:18:06.345 fused_ordering(912) 00:18:06.345 fused_ordering(913) 00:18:06.345 fused_ordering(914) 00:18:06.345 fused_ordering(915) 00:18:06.345 fused_ordering(916) 00:18:06.345 fused_ordering(917) 00:18:06.345 fused_ordering(918) 00:18:06.345 fused_ordering(919) 00:18:06.345 fused_ordering(920) 00:18:06.345 fused_ordering(921) 00:18:06.345 fused_ordering(922) 00:18:06.345 fused_ordering(923) 00:18:06.345 fused_ordering(924) 00:18:06.345 fused_ordering(925) 00:18:06.345 fused_ordering(926) 00:18:06.345 fused_ordering(927) 00:18:06.345 fused_ordering(928) 00:18:06.345 fused_ordering(929) 00:18:06.345 fused_ordering(930) 00:18:06.345 fused_ordering(931) 00:18:06.345 fused_ordering(932) 00:18:06.345 fused_ordering(933) 00:18:06.345 fused_ordering(934) 00:18:06.345 fused_ordering(935) 00:18:06.345 fused_ordering(936) 00:18:06.345 fused_ordering(937) 00:18:06.345 fused_ordering(938) 00:18:06.345 fused_ordering(939) 00:18:06.345 fused_ordering(940) 00:18:06.345 fused_ordering(941) 00:18:06.345 fused_ordering(942) 00:18:06.345 fused_ordering(943) 00:18:06.345 fused_ordering(944) 00:18:06.345 fused_ordering(945) 00:18:06.345 fused_ordering(946) 00:18:06.345 fused_ordering(947) 00:18:06.345 fused_ordering(948) 00:18:06.345 fused_ordering(949) 00:18:06.345 fused_ordering(950) 00:18:06.345 fused_ordering(951) 00:18:06.345 fused_ordering(952) 00:18:06.345 fused_ordering(953) 00:18:06.345 fused_ordering(954) 00:18:06.345 fused_ordering(955) 00:18:06.345 fused_ordering(956) 00:18:06.345 fused_ordering(957) 00:18:06.345 fused_ordering(958) 00:18:06.345 fused_ordering(959) 00:18:06.345 fused_ordering(960) 00:18:06.345 fused_ordering(961) 00:18:06.345 fused_ordering(962) 00:18:06.345 fused_ordering(963) 00:18:06.345 fused_ordering(964) 00:18:06.345 fused_ordering(965) 00:18:06.345 fused_ordering(966) 00:18:06.345 fused_ordering(967) 
00:18:06.345 fused_ordering(968)
00:18:06.345 fused_ordering(969)
00:18:06.345 fused_ordering(970)
00:18:06.345 fused_ordering(971)
00:18:06.345 fused_ordering(972)
00:18:06.345 fused_ordering(973)
00:18:06.345 fused_ordering(974)
00:18:06.345 fused_ordering(975)
00:18:06.345 fused_ordering(976)
00:18:06.345 fused_ordering(977)
00:18:06.345 fused_ordering(978)
00:18:06.345 fused_ordering(979)
00:18:06.345 fused_ordering(980)
00:18:06.345 fused_ordering(981)
00:18:06.345 fused_ordering(982)
00:18:06.345 fused_ordering(983)
00:18:06.345 fused_ordering(984)
00:18:06.345 fused_ordering(985)
00:18:06.345 fused_ordering(986)
00:18:06.345 fused_ordering(987)
00:18:06.345 fused_ordering(988)
00:18:06.345 fused_ordering(989)
00:18:06.345 fused_ordering(990)
00:18:06.345 fused_ordering(991)
00:18:06.345 fused_ordering(992)
00:18:06.345 fused_ordering(993)
00:18:06.345 fused_ordering(994)
00:18:06.345 fused_ordering(995)
00:18:06.345 fused_ordering(996)
00:18:06.345 fused_ordering(997)
00:18:06.345 fused_ordering(998)
00:18:06.345 fused_ordering(999)
00:18:06.345 fused_ordering(1000)
00:18:06.345 fused_ordering(1001)
00:18:06.345 fused_ordering(1002)
00:18:06.345 fused_ordering(1003)
00:18:06.345 fused_ordering(1004)
00:18:06.345 fused_ordering(1005)
00:18:06.345 fused_ordering(1006)
00:18:06.345 fused_ordering(1007)
00:18:06.345 fused_ordering(1008)
00:18:06.345 fused_ordering(1009)
00:18:06.345 fused_ordering(1010)
00:18:06.345 fused_ordering(1011)
00:18:06.345 fused_ordering(1012)
00:18:06.345 fused_ordering(1013)
00:18:06.345 fused_ordering(1014)
00:18:06.345 fused_ordering(1015)
00:18:06.345 fused_ordering(1016)
00:18:06.345 fused_ordering(1017)
00:18:06.345 fused_ordering(1018)
00:18:06.345 fused_ordering(1019)
00:18:06.345 fused_ordering(1020)
00:18:06.345 fused_ordering(1021)
00:18:06.345 fused_ordering(1022)
00:18:06.345 fused_ordering(1023)
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:06.345 rmmod nvme_tcp
00:18:06.345 rmmod nvme_fabrics
00:18:06.345 rmmod nvme_keyring
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 264558 ']'
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 264558
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 264558 ']'
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 264558
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 264558
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:06.345 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 264558'
00:18:06.346 killing process with pid 264558
00:18:06.346 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 264558
00:18:06.346 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 264558
00:18:06.346 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:06.605 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:09.142
00:18:09.142 real 0m12.796s
00:18:09.142 user 0m6.545s
00:18:09.142 sys 0m7.291s
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:09.142 ************************************
00:18:09.142 END TEST nvmf_fused_ordering
00:18:09.142 ************************************
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:09.142 ************************************
00:18:09.142 START TEST nvmf_ns_masking
00:18:09.142 ************************************
00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:18:09.142
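The teardown just traced is the setup in reverse: the initiator-side kernel modules come out, the nvmf_tgt process (pid 264558) is killed, and the test-bed addressing is flushed before the next sub-test rebuilds it. Condensed, with one assumption flagged (the log invokes _remove_spdk_ns but never shows its body, so the netns removal line is inferred rather than traced):

  modprobe -v -r nvme-tcp           # cascades per the trace: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics       # second pass, as in nvmf/common.sh@123
  kill 264558                       # killprocess in the harness does a kill -0 liveness check, kill, then wait
  ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns; not shown verbatim in the log
  ip -4 addr flush cvl_0_1

The real/user/sys summary closes out the roughly 13-second sub-test, and the START TEST banner shows nvmf_ns_masking immediately reusing the same harness entry points (nvmftestinit, prepare_net_devs) on the same pair of ports.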
* Looking for test storage... 00:18:09.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.142 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.143 13:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9a034e88-8c39-47e3-a3d3-ad9be70d4ca7 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=666cb3da-3ccf-44c3-be8d-4bdb9fac20c3 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a75579db-6eb8-45bc-a7f4-bc9d7b81e41c 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.143 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:15.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.717 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:15.718 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:15.718 Found net devices under 0000:af:00.0: cvl_0_0 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:15.718 Found net devices under 0000:af:00.1: cvl_0_1 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.718 13:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:15.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:18:15.718 00:18:15.718 --- 10.0.0.2 ping statistics --- 00:18:15.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.718 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:18:15.718 00:18:15.718 --- 10.0.0.1 ping statistics --- 00:18:15.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.718 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=268782 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 268782 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 268782 ']' 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
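A recap of the plumbing the trace above has just configured, before nvmf_tgt comes up: one port of the dual-port NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target side (10.0.0.2), its sibling cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in the firewall, and a ping in each direction confirms a real on-wire TCP path on a single machine. Collected from the trace (device names are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator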
00:18:15.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.718 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:15.718 [2024-07-25 13:46:11.806285] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:18:15.718 [2024-07-25 13:46:11.806335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.718 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.718 [2024-07-25 13:46:11.847118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:15.718 [2024-07-25 13:46:11.881853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.718 [2024-07-25 13:46:11.920528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.718 [2024-07-25 13:46:11.920569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.718 [2024-07-25 13:46:11.920578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.718 [2024-07-25 13:46:11.920586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.718 [2024-07-25 13:46:11.920594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
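With nvmf_tgt running inside the namespace and listening on /var/tmp/spdk.sock, everything that follows is driven over JSON-RPC. Condensed from the traces below (the rpc.py path is shortened to a variable here for readability), the provisioning sequence, plus the visibility probe the test applies after every masking change; the probe is a reconstruction from its traced commands, the authoritative ns_is_visible lives in test/nvmf/target/ns_masking.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Reconstruction of the visibility probe: a namespace counts as visible
    # only if it appears in list-ns AND reports a non-zero NGUID.
    ns_is_visible() {    # $1 = nsid in hex, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }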
00:18:15.718 [2024-07-25 13:46:11.920613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.718 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.718 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:15.718 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.718 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.718 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.014 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:16.014 [2024-07-25 13:46:12.794911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.014 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:16.014 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:16.014 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:16.273 Malloc1 00:18:16.273 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:16.273 Malloc2 00:18:16.533 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:16.533 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:16.791 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.791 [2024-07-25 13:46:13.653661] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.791 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:16.791 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a75579db-6eb8-45bc-a7f4-bc9d7b81e41c -a 10.0.0.2 -s 4420 -i 4 00:18:17.050 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.050 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:17.050 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.050 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:17.050 
13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:18.957 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:19.217 [ 0]:0x1 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a63cf3cc88e47b298b3bfeeff3f4073 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a63cf3cc88e47b298b3bfeeff3f4073 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:19.217 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:19.476 [ 0]:0x1 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a63cf3cc88e47b298b3bfeeff3f4073 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a63cf3cc88e47b298b3bfeeff3f4073 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:19.476 13:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:19.476 [ 1]:0x2 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:19.476 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:19.477 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:19.477 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.736 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.995 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:19.995 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:19.995 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a75579db-6eb8-45bc-a7f4-bc9d7b81e41c -a 10.0.0.2 -s 4420 -i 4 00:18:20.254 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:20.254 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:20.254 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.254 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:20.254 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:20.254 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:22.160 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:22.420 [ 0]:0x2 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:22.420 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:22.680 [ 0]:0x1 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a63cf3cc88e47b298b3bfeeff3f4073 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a63cf3cc88e47b298b3bfeeff3f4073 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.680 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:22.680 [ 1]:0x2 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.939 13:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:22.939 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.940 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.940 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.940 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:22.940 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.940 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.199 [ 0]:0x2 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.199 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a75579db-6eb8-45bc-a7f4-bc9d7b81e41c -a 10.0.0.2 -s 4420 -i 4 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:23.458 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.993 [ 0]:0x1 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a63cf3cc88e47b298b3bfeeff3f4073 00:18:25.993 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a63cf3cc88e47b298b3bfeeff3f4073 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.994 [ 1]:0x2 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.994 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.254 [ 0]:0x2 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.254 13:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:26.254 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:26.254 [2024-07-25 13:46:23.108288] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:18:26.254 request:
00:18:26.254 {
00:18:26.254 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:26.254 "nsid": 2,
00:18:26.254 "host": "nqn.2016-06.io.spdk:host1",
00:18:26.254 "method": "nvmf_ns_remove_host",
00:18:26.254 "req_id": 1
00:18:26.254 }
00:18:26.254 Got JSON-RPC error response
00:18:26.254 response:
00:18:26.254 {
00:18:26.254 "code": -32602,
00:18:26.254 "message": "Invalid parameters"
00:18:26.254 }
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:18:26.254 13:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.254 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.514 [ 0]:0x2 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4cdb8d047b72480b83c73f77c3d660f5 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4cdb8d047b72480b83c73f77c3d660f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:26.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=270799 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 270799 
/var/tmp/host.sock 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 270799 ']' 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:26.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.514 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.514 [2024-07-25 13:46:23.302340] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:18:26.514 [2024-07-25 13:46:23.302390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid270799 ] 00:18:26.514 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.514 [2024-07-25 13:46:23.337956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
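The ns_masking checks traced above reduce to a small probe. A minimal sketch, assuming /dev/nvme0 is the connected controller and with the harness's NOT/xtrace plumbing stripped (error handling simplified; the commands mirror ns_masking.sh@43-@45 in the trace):

  ns_is_visible() {
      local nsid=$1
      # The namespace must show up in the controller's active namespace list...
      nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
      # ...and must identify with a non-zero NGUID (all zeros means masked out).
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

The @111 step above is the inverse case: nvmf_ns_remove_host is expected to fail (here with the -32602 "Invalid parameters" JSON-RPC response shown), and the NOT wrapper passes only because it does.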
00:18:26.514 [2024-07-25 13:46:23.373055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.773 [2024-07-25 13:46:23.411164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.342 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.342 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:27.342 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:27.602 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:27.602 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9a034e88-8c39-47e3-a3d3-ad9be70d4ca7 00:18:27.602 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:18:27.602 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9A034E888C3947E3A3D3AD9BE70D4CA7 -i 00:18:27.862 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 666cb3da-3ccf-44c3-be8d-4bdb9fac20c3 00:18:27.862 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:18:27.862 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 666CB3DA3CCF44C3BE8D4BDB9FAC20C3 -i 00:18:28.121 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.121 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:28.381 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:28.381 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:28.640 nvme0n1 00:18:28.640 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:28.640 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:28.899 nvme1n2 00:18:28.899 13:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:28.899 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:28.899 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:28.899 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:28.899 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9a034e88-8c39-47e3-a3d3-ad9be70d4ca7 == \9\a\0\3\4\e\8\8\-\8\c\3\9\-\4\7\e\3\-\a\3\d\3\-\a\d\9\b\e\7\0\d\4\c\a\7 ]] 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:29.158 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 666cb3da-3ccf-44c3-be8d-4bdb9fac20c3 == \6\6\6\c\b\3\d\a\-\3\c\c\f\-\4\4\c\3\-\b\e\8\d\-\4\b\d\b\9\f\a\c\2\0\c\3 ]] 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 270799 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 270799 ']' 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 270799 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 270799 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 270799' 00:18:29.418 killing process with pid 270799 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 270799 00:18:29.418 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 270799 00:18:29.677 13:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.937 rmmod nvme_tcp 00:18:29.937 rmmod nvme_fabrics 00:18:29.937 rmmod nvme_keyring 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 268782 ']' 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 268782 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 268782 ']' 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 268782 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268782 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268782' 00:18:29.937 killing process with pid 268782 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 268782 00:18:29.937 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 268782 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
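The killprocess calls traced above (autotest_common.sh@950-@974) follow one pattern; a simplified sketch, with the sudo special case at @960 reduced to a comment:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                   # process must still be running
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 / reactor_1
          # the real helper handles name == sudo separately; elided here
          echo "killing process with pid $pid"
      fi
      kill "$pid" && wait "$pid"                   # SIGTERM, then reap
  }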
00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.196 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.754 00:18:32.754 real 0m23.542s 00:18:32.754 user 0m23.545s 00:18:32.754 sys 0m7.717s 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.754 ************************************ 00:18:32.754 END TEST nvmf_ns_masking 00:18:32.754 ************************************ 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.754 ************************************ 00:18:32.754 START TEST nvmf_nvme_cli 00:18:32.754 ************************************ 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:32.754 * Looking for test storage... 
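The START TEST / END TEST banners and the real/user/sys block above come from the run_test wrapper; a rough sketch implied by that output (the exact autotest_common.sh implementation may differ):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # emits the real/user/sys summary seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }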
00:18:32.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.754 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.755 13:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.755 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.324 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:39.324 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:39.324 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.324 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:39.324 Found net devices under 0000:af:00.0: cvl_0_0 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:39.324 Found net devices under 0000:af:00.1: cvl_0_1 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.324 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.324 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.325 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:18:39.325 00:18:39.325 --- 10.0.0.2 ping statistics --- 00:18:39.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.325 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:18:39.325 00:18:39.325 --- 10.0.0.1 ping statistics --- 00:18:39.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.325 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=275039 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 275039 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 275039 ']' 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.325 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.325 [2024-07-25 13:46:36.121938] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:18:39.325 [2024-07-25 13:46:36.121986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.325 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.325 [2024-07-25 13:46:36.162912] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:39.325 [2024-07-25 13:46:36.198316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.584 [2024-07-25 13:46:36.239605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.584 [2024-07-25 13:46:36.239644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.584 [2024-07-25 13:46:36.239654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.584 [2024-07-25 13:46:36.239664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.584 [2024-07-25 13:46:36.239672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.584 [2024-07-25 13:46:36.239727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.584 [2024-07-25 13:46:36.239789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.584 [2024-07-25 13:46:36.239892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.584 [2024-07-25 13:46:36.239895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 [2024-07-25 13:46:36.394225] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 Malloc0 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
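Collected from the rpc_cmd traces on this line and the continuation below (where Malloc1, its namespace, and the listeners are added), the target setup is equivalent to the following rpc.py sequence against the target's RPC socket; the $rpc shorthand is illustrative, not from the harness:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420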
00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 Malloc1 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.585 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.585 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.585 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.844 [2024-07-25 13:46:36.478579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:18:39.844 00:18:39.844 Discovery Log Number of Records 2, Generation counter 2 00:18:39.844 =====Discovery 
Log Entry 0====== 00:18:39.844 trtype: tcp 00:18:39.844 adrfam: ipv4 00:18:39.844 subtype: current discovery subsystem 00:18:39.844 treq: not required 00:18:39.844 portid: 0 00:18:39.844 trsvcid: 4420 00:18:39.844 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:39.844 traddr: 10.0.0.2 00:18:39.844 eflags: explicit discovery connections, duplicate discovery information 00:18:39.844 sectype: none 00:18:39.844 =====Discovery Log Entry 1====== 00:18:39.844 trtype: tcp 00:18:39.844 adrfam: ipv4 00:18:39.844 subtype: nvme subsystem 00:18:39.844 treq: not required 00:18:39.844 portid: 0 00:18:39.844 trsvcid: 4420 00:18:39.844 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:39.844 traddr: 10.0.0.2 00:18:39.844 eflags: none 00:18:39.844 sectype: none 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:39.844 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.224 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:41.224 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:41.224 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.224 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:41.224 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:41.224 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.129 13:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.129 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:43.389 /dev/nvme0n1 ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:43.389 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:43.389 13:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.648 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.907 rmmod nvme_tcp 00:18:43.907 rmmod nvme_fabrics 00:18:43.907 rmmod nvme_keyring 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 275039 ']' 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 275039 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 275039 ']' 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 275039 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:43.907 13:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275039 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275039' 00:18:43.907 killing process with pid 275039 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 275039 00:18:43.907 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 275039 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.166 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.167 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.074 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.074 00:18:46.074 real 0m13.801s 00:18:46.074 user 0m19.963s 00:18:46.074 sys 0m5.961s 00:18:46.074 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.074 13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.074 ************************************ 00:18:46.074 END TEST nvmf_nvme_cli 00:18:46.074 ************************************ 00:18:46.334 13:46:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:46.334 13:46:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:46.334 13:46:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:46.334 13:46:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:46.334 13:46:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.334 ************************************ 00:18:46.334 START TEST nvmf_vfio_user 00:18:46.334 ************************************ 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:46.334 * Looking for test storage... 
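Before the vfio_user run continues, note the module teardown traced at the end of the nvme_cli test above (nvmf/common.sh@117-@125): unloading is retried with errexit relaxed. A sketch; the loop's break and back-off are assumed, since the trace succeeds on the first pass:

  nvmfcleanup() {
      sync
      set +e                                 # unload may fail while devices settle
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && break   # retry policy assumed
          sleep 1
      done
      modprobe -v -r nvme-fabrics            # nvme_keyring unloads with it, per
      set -e                                 # the rmmod lines in the trace
      return 0
  }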
00:18:46.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:46.334 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:46.335 13:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=276479 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 276479' 00:18:46.335 Process pid: 276479 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 276479 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 276479 ']' 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.335 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:46.335 [2024-07-25 13:46:43.211390] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:18:46.335 [2024-07-25 13:46:43.211443] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.594 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.594 [2024-07-25 13:46:43.248002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:46.594 [2024-07-25 13:46:43.282526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.594 [2024-07-25 13:46:43.321768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
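Before any RPC in this test can run, nvmf_tgt is launched with -i 0 -e 0xFFFF -m '[0,1,2,3]' and the script blocks in waitforlisten until /var/tmp/spdk.sock accepts connections. A rough sketch of that startup handshake, assuming rpc_get_methods (a standard SPDK RPC) as the liveness probe:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # Keep probing the RPC socket until the target answers or the process dies.
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done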
00:18:46.594 [2024-07-25 13:46:43.321810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.594 [2024-07-25 13:46:43.321819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.594 [2024-07-25 13:46:43.321828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.594 [2024-07-25 13:46:43.321836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.594 [2024-07-25 13:46:43.321882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.594 [2024-07-25 13:46:43.321976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.594 [2024-07-25 13:46:43.322065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.594 [2024-07-25 13:46:43.322066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.163 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.163 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:47.163 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:48.543 Malloc1 00:18:48.543 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:48.801 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:49.060 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:49.060 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:49.060 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:49.320 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:49.320 Malloc2 00:18:49.320 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:49.579 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:49.839 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:49.839 [2024-07-25 13:46:46.722516] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:18:49.839 [2024-07-25 13:46:46.722555] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277036 ] 00:18:50.100 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.100 [2024-07-25 13:46:46.738230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
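Condensed, the vfio-user setup traced above is one transport plus, per device, a malloc bdev, a subsystem, a namespace, and a listener whose address is a filesystem directory rather than an IP endpoint; the sketch below restates the RPC sequence as the log shows it issued:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        $rpc bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done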
00:18:50.100 [2024-07-25 13:46:46.754076] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:50.100 [2024-07-25 13:46:46.762068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:50.100 [2024-07-25 13:46:46.762090] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff68af3a000 00:18:50.100 [2024-07-25 13:46:46.763068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.764068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.765073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.766074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.767076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.768080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.769087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.770089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.100 [2024-07-25 13:46:46.771099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:50.100 [2024-07-25 13:46:46.771111] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff689cff000 00:18:50.100 [2024-07-25 13:46:46.772003] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:50.100 [2024-07-25 13:46:46.785301] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:50.100 [2024-07-25 13:46:46.785323] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:50.100 [2024-07-25 13:46:46.788190] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:50.100 [2024-07-25 13:46:46.788230] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:50.100 [2024-07-25 13:46:46.788307] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:50.100 [2024-07-25 13:46:46.788325] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:50.100 [2024-07-25 13:46:46.788332] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
00:18:50.100 [2024-07-25 13:46:46.789191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:50.100 [2024-07-25 13:46:46.789205] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:50.100 [2024-07-25 13:46:46.789214] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:50.100 [2024-07-25 13:46:46.790197] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:50.100 [2024-07-25 13:46:46.790207] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:50.100 [2024-07-25 13:46:46.790217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:50.100 [2024-07-25 13:46:46.791200] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:50.100 [2024-07-25 13:46:46.791209] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:50.100 [2024-07-25 13:46:46.792208] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:50.100 [2024-07-25 13:46:46.792218] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:50.100 [2024-07-25 13:46:46.792224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:50.100 [2024-07-25 13:46:46.792233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:50.100 [2024-07-25 13:46:46.792340] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:50.100 [2024-07-25 13:46:46.792347] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:50.100 [2024-07-25 13:46:46.792354] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:50.100 [2024-07-25 13:46:46.793215] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:50.100 [2024-07-25 13:46:46.794224] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:50.100 [2024-07-25 13:46:46.795227] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:50.100 [2024-07-25 13:46:46.796224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.100 [2024-07-25 13:46:46.796294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:18:50.100 [2024-07-25 13:46:46.797239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:50.100 [2024-07-25 13:46:46.797248] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:50.100 [2024-07-25 13:46:46.797255] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:50.100 [2024-07-25 13:46:46.797274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:50.100 [2024-07-25 13:46:46.797288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:50.100 [2024-07-25 13:46:46.797304] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:50.100 [2024-07-25 13:46:46.797310] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:50.100 [2024-07-25 13:46:46.797315] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.100 [2024-07-25 13:46:46.797330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:50.100 [2024-07-25 13:46:46.797368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:50.100 [2024-07-25 13:46:46.797380] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:50.100 [2024-07-25 13:46:46.797386] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:50.101 [2024-07-25 13:46:46.797392] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:50.101 [2024-07-25 13:46:46.797398] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:50.101 [2024-07-25 13:46:46.797404] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:50.101 [2024-07-25 13:46:46.797410] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:50.101 [2024-07-25 13:46:46.797417] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.101 [2024-07-25 13:46:46.797482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.101 [2024-07-25 13:46:46.797491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.101 [2024-07-25 13:46:46.797500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.101 [2024-07-25 13:46:46.797506] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797549] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:50.101 [2024-07-25 13:46:46.797555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797658] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:50.101 [2024-07-25 13:46:46.797664] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:50.101 [2024-07-25 13:46:46.797668] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.101 [2024-07-25 13:46:46.797676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797697] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:50.101 [2024-07-25 13:46:46.797707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797729] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:50.101 [2024-07-25 13:46:46.797735] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:50.101 [2024-07-25 13:46:46.797739] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.101 [2024-07-25 13:46:46.797747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797796] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:50.101 [2024-07-25 13:46:46.797802] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:50.101 [2024-07-25 13:46:46.797806] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.101 [2024-07-25 13:46:46.797813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797833] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797850] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797859] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797866] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797872] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797879] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:50.101 [2024-07-25 13:46:46.797884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:50.101 [2024-07-25 13:46:46.797891] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:50.101 [2024-07-25 13:46:46.797910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.797979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.797989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:50.101 [2024-07-25 13:46:46.798005] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:50.101 [2024-07-25 13:46:46.798011] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:50.101 [2024-07-25 13:46:46.798015] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:50.101 [2024-07-25 13:46:46.798020] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:50.101 [2024-07-25 13:46:46.798024] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:50.101 [2024-07-25 13:46:46.798031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:50.101 [2024-07-25 13:46:46.798039] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:50.101 [2024-07-25 13:46:46.798045] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:50.101 [2024-07-25 13:46:46.798049] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.101 [2024-07-25 13:46:46.798056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.798063] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:50.101 [2024-07-25 13:46:46.798069] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:50.101 [2024-07-25 13:46:46.798074] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.101 [2024-07-25 13:46:46.798080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.798088] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:50.101 [2024-07-25 13:46:46.798094] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:50.101 [2024-07-25 13:46:46.798098] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.101 [2024-07-25 13:46:46.798105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:50.101 [2024-07-25 13:46:46.798113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:50.102 [2024-07-25 13:46:46.798127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:50.102 [2024-07-25 13:46:46.798140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:50.102 [2024-07-25 13:46:46.798149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:50.102 ===================================================== 00:18:50.102 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:50.102 ===================================================== 00:18:50.102 Controller Capabilities/Features 00:18:50.102 ================================ 00:18:50.102 Vendor ID: 4e58 00:18:50.102 Subsystem Vendor ID: 4e58 00:18:50.102 Serial Number: SPDK1 00:18:50.102 Model Number: SPDK bdev Controller 00:18:50.102 Firmware Version: 24.09 00:18:50.102 Recommended Arb Burst: 6 00:18:50.102 IEEE OUI Identifier: 8d 6b 50 00:18:50.102 Multi-path I/O 00:18:50.102 May have multiple subsystem ports: Yes 00:18:50.102 May have multiple controllers: Yes 00:18:50.102 Associated with SR-IOV VF: No 00:18:50.102 Max Data Transfer Size: 131072 00:18:50.102 Max Number of Namespaces: 32 00:18:50.102 Max Number of I/O Queues: 127 00:18:50.102 NVMe Specification Version (VS): 1.3 00:18:50.102 NVMe Specification Version (Identify): 1.3 00:18:50.102 Maximum Queue Entries: 256 00:18:50.102 Contiguous Queues Required: Yes 00:18:50.102 Arbitration Mechanisms Supported 00:18:50.102 Weighted Round Robin: Not Supported 00:18:50.102 Vendor Specific: Not Supported 00:18:50.102 Reset Timeout: 15000 ms 00:18:50.102 Doorbell Stride: 4 bytes 00:18:50.102 NVM Subsystem Reset: Not Supported 00:18:50.102 Command Sets Supported 00:18:50.102 NVM Command Set: Supported 00:18:50.102 Boot Partition: Not Supported 00:18:50.102 Memory Page Size Minimum: 4096 bytes 00:18:50.102 Memory Page Size Maximum: 4096 bytes 00:18:50.102 Persistent Memory Region: Not Supported 00:18:50.102 Optional Asynchronous Events Supported 00:18:50.102 Namespace Attribute Notices: 
Supported 00:18:50.102 Firmware Activation Notices: Not Supported 00:18:50.102 ANA Change Notices: Not Supported 00:18:50.102 PLE Aggregate Log Change Notices: Not Supported 00:18:50.102 LBA Status Info Alert Notices: Not Supported 00:18:50.102 EGE Aggregate Log Change Notices: Not Supported 00:18:50.102 Normal NVM Subsystem Shutdown event: Not Supported 00:18:50.102 Zone Descriptor Change Notices: Not Supported 00:18:50.102 Discovery Log Change Notices: Not Supported 00:18:50.102 Controller Attributes 00:18:50.102 128-bit Host Identifier: Supported 00:18:50.102 Non-Operational Permissive Mode: Not Supported 00:18:50.102 NVM Sets: Not Supported 00:18:50.102 Read Recovery Levels: Not Supported 00:18:50.102 Endurance Groups: Not Supported 00:18:50.102 Predictable Latency Mode: Not Supported 00:18:50.102 Traffic Based Keep ALive: Not Supported 00:18:50.102 Namespace Granularity: Not Supported 00:18:50.102 SQ Associations: Not Supported 00:18:50.102 UUID List: Not Supported 00:18:50.102 Multi-Domain Subsystem: Not Supported 00:18:50.102 Fixed Capacity Management: Not Supported 00:18:50.102 Variable Capacity Management: Not Supported 00:18:50.102 Delete Endurance Group: Not Supported 00:18:50.102 Delete NVM Set: Not Supported 00:18:50.102 Extended LBA Formats Supported: Not Supported 00:18:50.102 Flexible Data Placement Supported: Not Supported 00:18:50.102 00:18:50.102 Controller Memory Buffer Support 00:18:50.102 ================================ 00:18:50.102 Supported: No 00:18:50.102 00:18:50.102 Persistent Memory Region Support 00:18:50.102 ================================ 00:18:50.102 Supported: No 00:18:50.102 00:18:50.102 Admin Command Set Attributes 00:18:50.102 ============================ 00:18:50.102 Security Send/Receive: Not Supported 00:18:50.102 Format NVM: Not Supported 00:18:50.102 Firmware Activate/Download: Not Supported 00:18:50.102 Namespace Management: Not Supported 00:18:50.102 Device Self-Test: Not Supported 00:18:50.102 Directives: Not Supported 00:18:50.102 NVMe-MI: Not Supported 00:18:50.102 Virtualization Management: Not Supported 00:18:50.102 Doorbell Buffer Config: Not Supported 00:18:50.102 Get LBA Status Capability: Not Supported 00:18:50.102 Command & Feature Lockdown Capability: Not Supported 00:18:50.102 Abort Command Limit: 4 00:18:50.102 Async Event Request Limit: 4 00:18:50.102 Number of Firmware Slots: N/A 00:18:50.102 Firmware Slot 1 Read-Only: N/A 00:18:50.102 Firmware Activation Without Reset: N/A 00:18:50.102 Multiple Update Detection Support: N/A 00:18:50.102 Firmware Update Granularity: No Information Provided 00:18:50.102 Per-Namespace SMART Log: No 00:18:50.102 Asymmetric Namespace Access Log Page: Not Supported 00:18:50.102 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:50.102 Command Effects Log Page: Supported 00:18:50.102 Get Log Page Extended Data: Supported 00:18:50.102 Telemetry Log Pages: Not Supported 00:18:50.102 Persistent Event Log Pages: Not Supported 00:18:50.102 Supported Log Pages Log Page: May Support 00:18:50.102 Commands Supported & Effects Log Page: Not Supported 00:18:50.102 Feature Identifiers & Effects Log Page:May Support 00:18:50.102 NVMe-MI Commands & Effects Log Page: May Support 00:18:50.102 Data Area 4 for Telemetry Log: Not Supported 00:18:50.102 Error Log Page Entries Supported: 128 00:18:50.102 Keep Alive: Supported 00:18:50.102 Keep Alive Granularity: 10000 ms 00:18:50.102 00:18:50.102 NVM Command Set Attributes 00:18:50.102 ========================== 00:18:50.102 Submission Queue Entry Size 00:18:50.102 Max: 64 
00:18:50.102 Min: 64 00:18:50.102 Completion Queue Entry Size 00:18:50.102 Max: 16 00:18:50.102 Min: 16 00:18:50.102 Number of Namespaces: 32 00:18:50.102 Compare Command: Supported 00:18:50.102 Write Uncorrectable Command: Not Supported 00:18:50.102 Dataset Management Command: Supported 00:18:50.102 Write Zeroes Command: Supported 00:18:50.102 Set Features Save Field: Not Supported 00:18:50.102 Reservations: Not Supported 00:18:50.102 Timestamp: Not Supported 00:18:50.102 Copy: Supported 00:18:50.102 Volatile Write Cache: Present 00:18:50.102 Atomic Write Unit (Normal): 1 00:18:50.102 Atomic Write Unit (PFail): 1 00:18:50.102 Atomic Compare & Write Unit: 1 00:18:50.102 Fused Compare & Write: Supported 00:18:50.102 Scatter-Gather List 00:18:50.102 SGL Command Set: Supported (Dword aligned) 00:18:50.102 SGL Keyed: Not Supported 00:18:50.102 SGL Bit Bucket Descriptor: Not Supported 00:18:50.102 SGL Metadata Pointer: Not Supported 00:18:50.102 Oversized SGL: Not Supported 00:18:50.102 SGL Metadata Address: Not Supported 00:18:50.102 SGL Offset: Not Supported 00:18:50.102 Transport SGL Data Block: Not Supported 00:18:50.102 Replay Protected Memory Block: Not Supported 00:18:50.102 00:18:50.102 Firmware Slot Information 00:18:50.102 ========================= 00:18:50.102 Active slot: 1 00:18:50.102 Slot 1 Firmware Revision: 24.09 00:18:50.102 00:18:50.102 00:18:50.102 Commands Supported and Effects 00:18:50.102 ============================== 00:18:50.102 Admin Commands 00:18:50.102 -------------- 00:18:50.102 Get Log Page (02h): Supported 00:18:50.102 Identify (06h): Supported 00:18:50.102 Abort (08h): Supported 00:18:50.102 Set Features (09h): Supported 00:18:50.102 Get Features (0Ah): Supported 00:18:50.102 Asynchronous Event Request (0Ch): Supported 00:18:50.102 Keep Alive (18h): Supported 00:18:50.102 I/O Commands 00:18:50.102 ------------ 00:18:50.102 Flush (00h): Supported LBA-Change 00:18:50.102 Write (01h): Supported LBA-Change 00:18:50.103 Read (02h): Supported 00:18:50.103 Compare (05h): Supported 00:18:50.103 Write Zeroes (08h): Supported LBA-Change 00:18:50.103 Dataset Management (09h): Supported LBA-Change 00:18:50.103 Copy (19h): Supported LBA-Change 00:18:50.103 00:18:50.103 Error Log 00:18:50.103 ========= 00:18:50.103 00:18:50.103 Arbitration 00:18:50.103 =========== 00:18:50.103 Arbitration Burst: 1 00:18:50.103 00:18:50.103 Power Management 00:18:50.103 ================ 00:18:50.103 Number of Power States: 1 00:18:50.103 Current Power State: Power State #0 00:18:50.103 Power State #0: 00:18:50.103 Max Power: 0.00 W 00:18:50.103 Non-Operational State: Operational 00:18:50.103 Entry Latency: Not Reported 00:18:50.103 Exit Latency: Not Reported 00:18:50.103 Relative Read Throughput: 0 00:18:50.103 Relative Read Latency: 0 00:18:50.103 Relative Write Throughput: 0 00:18:50.103 Relative Write Latency: 0 00:18:50.103 Idle Power: Not Reported 00:18:50.103 Active Power: Not Reported 00:18:50.103 Non-Operational Permissive Mode: Not Supported 00:18:50.103 00:18:50.103 Health Information 00:18:50.103 ================== 00:18:50.103 Critical Warnings: 00:18:50.103 Available Spare Space: OK 00:18:50.103 Temperature: OK 00:18:50.103 Device Reliability: OK 00:18:50.103 Read Only: No 00:18:50.103 Volatile Memory Backup: OK 00:18:50.103 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:50.103 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:50.103 Available Spare: 0% 00:18:50.103 Available Sp[2024-07-25 13:46:46.798243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:50.103 [2024-07-25 13:46:46.798255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:50.103 [2024-07-25 13:46:46.798284] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:50.103 [2024-07-25 13:46:46.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.103 [2024-07-25 13:46:46.798302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.103 [2024-07-25 13:46:46.798311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.103 [2024-07-25 13:46:46.798319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.103 [2024-07-25 13:46:46.800724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:50.103 [2024-07-25 13:46:46.800736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:50.103 [2024-07-25 13:46:46.801260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.103 [2024-07-25 13:46:46.801308] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:50.103 [2024-07-25 13:46:46.801315] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:50.103 [2024-07-25 13:46:46.802261] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:50.103 [2024-07-25 13:46:46.802273] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:50.103 [2024-07-25 13:46:46.802323] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:50.103 [2024-07-25 13:46:46.805773] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:50.103 are Threshold: 0% 00:18:50.103 Life Percentage Used: 0% 00:18:50.103 Data Units Read: 0 00:18:50.103 Data Units Written: 0 00:18:50.103 Host Read Commands: 0 00:18:50.103 Host Write Commands: 0 00:18:50.103 Controller Busy Time: 0 minutes 00:18:50.103 Power Cycles: 0 00:18:50.103 Power On Hours: 0 hours 00:18:50.103 Unsafe Shutdowns: 0 00:18:50.103 Unrecoverable Media Errors: 0 00:18:50.103 Lifetime Error Log Entries: 0 00:18:50.103 Warning Temperature Time: 0 minutes 00:18:50.103 Critical Temperature Time: 0 minutes 00:18:50.103 00:18:50.103 Number of Queues 00:18:50.103 ================ 00:18:50.103 Number of I/O Submission Queues: 127 00:18:50.103 Number of I/O Completion Queues: 127 00:18:50.103 00:18:50.103 Active Namespaces 00:18:50.103 ================= 00:18:50.103 Namespace ID:1 00:18:50.103 Error Recovery Timeout: Unlimited 00:18:50.103 Command Set Identifier: NVM (00h) 00:18:50.103 Deallocate: Supported 00:18:50.103 Deallocated/Unwritten Error: Not 
Supported 00:18:50.103 Deallocated Read Value: Unknown 00:18:50.103 Deallocate in Write Zeroes: Not Supported 00:18:50.103 Deallocated Guard Field: 0xFFFF 00:18:50.103 Flush: Supported 00:18:50.103 Reservation: Supported 00:18:50.103 Namespace Sharing Capabilities: Multiple Controllers 00:18:50.103 Size (in LBAs): 131072 (0GiB) 00:18:50.103 Capacity (in LBAs): 131072 (0GiB) 00:18:50.103 Utilization (in LBAs): 131072 (0GiB) 00:18:50.103 NGUID: 6607F8F0313E45C7B65E0CA198C66D0A 00:18:50.103 UUID: 6607f8f0-313e-45c7-b65e-0ca198c66d0a 00:18:50.103 Thin Provisioning: Not Supported 00:18:50.103 Per-NS Atomic Units: Yes 00:18:50.103 Atomic Boundary Size (Normal): 0 00:18:50.103 Atomic Boundary Size (PFail): 0 00:18:50.103 Atomic Boundary Offset: 0 00:18:50.103 Maximum Single Source Range Length: 65535 00:18:50.103 Maximum Copy Length: 65535 00:18:50.103 Maximum Source Range Count: 1 00:18:50.103 NGUID/EUI64 Never Reused: No 00:18:50.103 Namespace Write Protected: No 00:18:50.103 Number of LBA Formats: 1 00:18:50.103 Current LBA Format: LBA Format #00 00:18:50.103 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:50.103 00:18:50.103 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:50.103 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.403 [2024-07-25 13:46:47.014225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:55.676 Initializing NVMe Controllers 00:18:55.676 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:55.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:55.676 Initialization complete. Launching workers. 00:18:55.676 ======================================================== 00:18:55.676 Latency(us) 00:18:55.676 Device Information : IOPS MiB/s Average min max 00:18:55.676 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39961.72 156.10 3202.91 921.36 7669.92 00:18:55.676 ======================================================== 00:18:55.676 Total : 39961.72 156.10 3202.91 921.36 7669.92 00:18:55.676 00:18:55.676 [2024-07-25 13:46:52.035823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:55.676 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:55.676 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.676 [2024-07-25 13:46:52.257869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:00.952 Initializing NVMe Controllers 00:19:00.952 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:00.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:00.952 Initialization complete. Launching workers. 
00:19:00.952 ======================================================== 00:19:00.952 Latency(us) 00:19:00.952 Device Information : IOPS MiB/s Average min max 00:19:00.952 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.96 62.65 7979.94 6978.58 8983.44 00:19:00.952 ======================================================== 00:19:00.952 Total : 16038.96 62.65 7979.94 6978.58 8983.44 00:19:00.952 00:19:00.952 [2024-07-25 13:46:57.293532] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:00.952 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:00.952 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.952 [2024-07-25 13:46:57.511600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:06.232 [2024-07-25 13:47:02.584975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:06.232 Initializing NVMe Controllers 00:19:06.232 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:06.232 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:06.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:06.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:06.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:06.233 Initialization complete. Launching workers. 00:19:06.233 Starting thread on core 2 00:19:06.233 Starting thread on core 3 00:19:06.233 Starting thread on core 1 00:19:06.233 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:06.233 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.233 [2024-07-25 13:47:02.880711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:09.522 [2024-07-25 13:47:05.944077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.522 Initializing NVMe Controllers 00:19:09.522 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.522 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.522 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:09.522 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:09.522 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:09.522 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:09.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:09.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:09.522 Initialization complete. Launching workers. 
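The MiB/s column in the two perf summaries above is simply IOPS times the 4096-byte I/O size; both rows check out:

    awk 'BEGIN { printf "%.2f\n", 39961.72 * 4096 / (1024 * 1024) }'   # read run  -> 156.10
    awk 'BEGIN { printf "%.2f\n", 16038.96 * 4096 / (1024 * 1024) }'   # write run -> 62.65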
00:19:09.522 Starting thread on core 1 with urgent priority queue 00:19:09.522 Starting thread on core 2 with urgent priority queue 00:19:09.522 Starting thread on core 3 with urgent priority queue 00:19:09.522 Starting thread on core 0 with urgent priority queue 00:19:09.522 SPDK bdev Controller (SPDK1 ) core 0: 9525.67 IO/s 10.50 secs/100000 ios 00:19:09.522 SPDK bdev Controller (SPDK1 ) core 1: 7538.00 IO/s 13.27 secs/100000 ios 00:19:09.522 SPDK bdev Controller (SPDK1 ) core 2: 9283.33 IO/s 10.77 secs/100000 ios 00:19:09.522 SPDK bdev Controller (SPDK1 ) core 3: 7490.67 IO/s 13.35 secs/100000 ios 00:19:09.522 ======================================================== 00:19:09.522 00:19:09.522 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:09.522 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.522 [2024-07-25 13:47:06.230452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:09.522 Initializing NVMe Controllers 00:19:09.522 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.522 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.522 Namespace ID: 1 size: 0GB 00:19:09.522 Initialization complete. 00:19:09.522 INFO: using host memory buffer for IO 00:19:09.522 Hello world! 00:19:09.522 [2024-07-25 13:47:06.264829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.522 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:09.522 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.782 [2024-07-25 13:47:06.559245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:10.719 Initializing NVMe Controllers 00:19:10.719 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:10.719 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:10.719 Initialization complete. Launching workers. 
00:19:10.719 submit (in ns) avg, min, max = 5630.3, 3021.6, 3999006.4 00:19:10.719 complete (in ns) avg, min, max = 18941.5, 1671.2, 7986889.6 00:19:10.719 00:19:10.719 Submit histogram 00:19:10.719 ================ 00:19:10.719 Range in us Cumulative Count 00:19:10.719 3.021 - 3.034: 0.0059% ( 1) 00:19:10.719 3.059 - 3.072: 0.0294% ( 4) 00:19:10.719 3.072 - 3.085: 0.0707% ( 7) 00:19:10.719 3.085 - 3.098: 0.3534% ( 48) 00:19:10.719 3.098 - 3.110: 1.3194% ( 164) 00:19:10.719 3.110 - 3.123: 3.0805% ( 299) 00:19:10.719 3.123 - 3.136: 6.0196% ( 499) 00:19:10.719 3.136 - 3.149: 9.9482% ( 667) 00:19:10.719 3.149 - 3.162: 15.1490% ( 883) 00:19:10.719 3.162 - 3.174: 21.7222% ( 1116) 00:19:10.719 3.174 - 3.187: 27.4708% ( 976) 00:19:10.719 3.187 - 3.200: 34.0028% ( 1109) 00:19:10.719 3.200 - 3.213: 40.7645% ( 1148) 00:19:10.719 3.213 - 3.226: 47.7971% ( 1194) 00:19:10.719 3.226 - 3.238: 54.3586% ( 1114) 00:19:10.719 3.238 - 3.251: 57.9927% ( 617) 00:19:10.720 3.251 - 3.264: 60.6785% ( 456) 00:19:10.720 3.264 - 3.277: 63.5293% ( 484) 00:19:10.720 3.277 - 3.302: 68.2589% ( 803) 00:19:10.720 3.302 - 3.328: 72.4526% ( 712) 00:19:10.720 3.328 - 3.354: 80.0448% ( 1289) 00:19:10.720 3.354 - 3.379: 85.2986% ( 892) 00:19:10.720 3.379 - 3.405: 87.3307% ( 345) 00:19:10.720 3.405 - 3.430: 88.2259% ( 152) 00:19:10.720 3.430 - 3.456: 89.1389% ( 155) 00:19:10.720 3.456 - 3.482: 90.6055% ( 249) 00:19:10.720 3.482 - 3.507: 92.2841% ( 285) 00:19:10.720 3.507 - 3.533: 94.0335% ( 297) 00:19:10.720 3.533 - 3.558: 95.3705% ( 227) 00:19:10.720 3.558 - 3.584: 96.5603% ( 202) 00:19:10.720 3.584 - 3.610: 97.7088% ( 195) 00:19:10.720 3.610 - 3.635: 98.5805% ( 148) 00:19:10.720 3.635 - 3.661: 99.0576% ( 81) 00:19:10.720 3.661 - 3.686: 99.3639% ( 52) 00:19:10.720 3.686 - 3.712: 99.5406% ( 30) 00:19:10.720 3.712 - 3.738: 99.6407% ( 17) 00:19:10.720 3.738 - 3.763: 99.6761% ( 6) 00:19:10.720 3.763 - 3.789: 99.6819% ( 1) 00:19:10.720 3.789 - 3.814: 99.6878% ( 1) 00:19:10.720 3.942 - 3.968: 99.6937% ( 1) 00:19:10.720 5.299 - 5.325: 99.6996% ( 1) 00:19:10.720 5.402 - 5.427: 99.7055% ( 1) 00:19:10.720 5.478 - 5.504: 99.7114% ( 1) 00:19:10.720 5.632 - 5.658: 99.7173% ( 1) 00:19:10.720 5.683 - 5.709: 99.7291% ( 2) 00:19:10.720 5.709 - 5.734: 99.7350% ( 1) 00:19:10.720 5.760 - 5.786: 99.7408% ( 1) 00:19:10.720 5.837 - 5.862: 99.7467% ( 1) 00:19:10.720 5.888 - 5.914: 99.7526% ( 1) 00:19:10.720 5.914 - 5.939: 99.7585% ( 1) 00:19:10.720 5.939 - 5.965: 99.7644% ( 1) 00:19:10.720 6.221 - 6.246: 99.7703% ( 1) 00:19:10.720 6.246 - 6.272: 99.7762% ( 1) 00:19:10.720 6.298 - 6.323: 99.7880% ( 2) 00:19:10.720 6.323 - 6.349: 99.7939% ( 1) 00:19:10.720 6.349 - 6.374: 99.8056% ( 2) 00:19:10.720 6.374 - 6.400: 99.8115% ( 1) 00:19:10.720 6.477 - 6.502: 99.8174% ( 1) 00:19:10.720 6.554 - 6.605: 99.8233% ( 1) 00:19:10.720 6.605 - 6.656: 99.8292% ( 1) 00:19:10.720 6.656 - 6.707: 99.8410% ( 2) 00:19:10.720 6.758 - 6.810: 99.8469% ( 1) 00:19:10.720 6.810 - 6.861: 99.8586% ( 2) 00:19:10.720 6.861 - 6.912: 99.8645% ( 1) 00:19:10.720 6.912 - 6.963: 99.8704% ( 1) 00:19:10.720 6.963 - 7.014: 99.8763% ( 1) 00:19:10.720 7.066 - 7.117: 99.8822% ( 1) 00:19:10.720 7.117 - 7.168: 99.8940% ( 2) 00:19:10.720 7.168 - 7.219: 99.8999% ( 1) 00:19:10.720 7.219 - 7.270: 99.9117% ( 2) 00:19:10.720 7.270 - 7.322: 99.9293% ( 3) 00:19:10.720 7.424 - 7.475: 99.9352% ( 1) 00:19:10.720 7.526 - 7.578: 99.9411% ( 1) 00:19:10.720 3984.589 - 4010.803: 100.0000% ( 10) 00:19:10.720 00:19:10.720 Complete histogram 00:19:10.720 ================== 00:19:10.720 Range in us Cumulative Count 
00:19:10.720 1.664 - 1.677: 0.0177% ( 3) 00:19:10.720 1.677 - 1.690: 0.0530% ( 6) 00:19:10.720 1.690 - 1.702: 0.0883% ( 6) [2024-07-25 13:47:07.575137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:10.980 1.702 - 1.715: 1.2369% ( 195) 00:19:10.980 1.715 - 1.728: 12.0744% ( 1840) 00:19:10.980 1.728 - 1.741: 20.3086% ( 1398) 00:19:10.980 1.741 - 1.754: 24.8439% ( 770) 00:19:10.980 1.754 - 1.766: 59.1707% ( 5828) 00:19:10.980 1.766 - 1.779: 89.3804% ( 5129) 00:19:10.980 1.779 - 1.792: 94.9287% ( 942) 00:19:10.980 1.792 - 1.805: 97.4320% ( 425) 00:19:10.980 1.805 - 1.818: 98.1093% ( 115) 00:19:10.980 1.818 - 1.830: 98.4450% ( 57) 00:19:10.980 1.830 - 1.843: 98.9221% ( 81) 00:19:10.980 1.843 - 1.856: 99.1990% ( 47) 00:19:10.980 1.856 - 1.869: 99.2932% ( 16) 00:19:10.980 1.869 - 1.882: 99.3344% ( 7) 00:19:10.980 1.882 - 1.894: 99.3521% ( 3) 00:19:10.980 1.894 - 1.907: 99.3580% ( 1) 00:19:10.980 1.907 - 1.920: 99.3698% ( 2) 00:19:10.980 1.920 - 1.933: 99.3757% ( 1) 00:19:10.980 1.933 - 1.946: 99.3816% ( 1) 00:19:10.980 1.946 - 1.958: 99.3933% ( 2) 00:19:10.980 1.984 - 1.997: 99.3992% ( 1) 00:19:10.980 2.048 - 2.061: 99.4051% ( 1) 00:19:10.980 2.240 - 2.253: 99.4110% ( 1) 00:19:10.980 2.726 - 2.739: 99.4169% ( 1) 00:19:10.980 3.942 - 3.968: 99.4228% ( 1) 00:19:10.980 3.968 - 3.994: 99.4287% ( 1) 00:19:10.980 3.994 - 4.019: 99.4346% ( 1) 00:19:10.980 4.122 - 4.147: 99.4405% ( 1) 00:19:10.980 4.224 - 4.250: 99.4463% ( 1) 00:19:10.980 4.352 - 4.378: 99.4522% ( 1) 00:19:10.980 4.506 - 4.531: 99.4640% ( 2) 00:19:10.980 4.685 - 4.710: 99.4699% ( 1) 00:19:10.980 4.710 - 4.736: 99.4758% ( 1) 00:19:10.980 4.915 - 4.941: 99.4817% ( 1) 00:19:10.980 5.043 - 5.069: 99.4876% ( 1) 00:19:10.980 5.069 - 5.094: 99.4935% ( 1) 00:19:10.980 5.222 - 5.248: 99.4994% ( 1) 00:19:10.980 5.274 - 5.299: 99.5052% ( 1) 00:19:10.980 5.376 - 5.402: 99.5111% ( 1) 00:19:10.980 5.427 - 5.453: 99.5170% ( 1) 00:19:10.980 5.555 - 5.581: 99.5229% ( 1) 00:19:10.980 5.683 - 5.709: 99.5288% ( 1) 00:19:10.980 5.786 - 5.811: 99.5347% ( 1) 00:19:10.980 5.888 - 5.914: 99.5406% ( 1) 00:19:10.980 6.067 - 6.093: 99.5465% ( 1) 00:19:10.980 6.221 - 6.246: 99.5524% ( 1) 00:19:10.980 6.374 - 6.400: 99.5583% ( 1) 00:19:10.980 9.216 - 9.267: 99.5641% ( 1) 00:19:10.980 10.803 - 10.854: 99.5700% ( 1) 00:19:10.980 11.520 - 11.571: 99.5759% ( 1) 00:19:10.980 3984.589 - 4010.803: 99.9941% ( 71) 00:19:10.980 7969.178 - 8021.606: 100.0000% ( 1) 00:19:10.980 00:19:10.980 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:10.980 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:10.980 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:10.980 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:10.980 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:10.980 [ 00:19:10.980 { 00:19:10.980 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:10.980 "subtype": "Discovery", 00:19:10.980 "listen_addresses": [], 00:19:10.980 "allow_any_host": true, 00:19:10.980 "hosts": [] 00:19:10.980 }, 00:19:10.980 {
00:19:10.980 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:10.980 "subtype": "NVMe", 00:19:10.980 "listen_addresses": [ 00:19:10.980 { 00:19:10.980 "trtype": "VFIOUSER", 00:19:10.980 "adrfam": "IPv4", 00:19:10.980 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:10.980 "trsvcid": "0" 00:19:10.980 } 00:19:10.980 ], 00:19:10.980 "allow_any_host": true, 00:19:10.980 "hosts": [], 00:19:10.980 "serial_number": "SPDK1", 00:19:10.980 "model_number": "SPDK bdev Controller", 00:19:10.980 "max_namespaces": 32, 00:19:10.980 "min_cntlid": 1, 00:19:10.980 "max_cntlid": 65519, 00:19:10.980 "namespaces": [ 00:19:10.980 { 00:19:10.980 "nsid": 1, 00:19:10.980 "bdev_name": "Malloc1", 00:19:10.980 "name": "Malloc1", 00:19:10.980 "nguid": "6607F8F0313E45C7B65E0CA198C66D0A", 00:19:10.980 "uuid": "6607f8f0-313e-45c7-b65e-0ca198c66d0a" 00:19:10.980 } 00:19:10.980 ] 00:19:10.980 }, 00:19:10.980 { 00:19:10.980 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:10.980 "subtype": "NVMe", 00:19:10.980 "listen_addresses": [ 00:19:10.980 { 00:19:10.980 "trtype": "VFIOUSER", 00:19:10.980 "adrfam": "IPv4", 00:19:10.980 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:10.980 "trsvcid": "0" 00:19:10.980 } 00:19:10.980 ], 00:19:10.980 "allow_any_host": true, 00:19:10.980 "hosts": [], 00:19:10.980 "serial_number": "SPDK2", 00:19:10.980 "model_number": "SPDK bdev Controller", 00:19:10.980 "max_namespaces": 32, 00:19:10.981 "min_cntlid": 1, 00:19:10.981 "max_cntlid": 65519, 00:19:10.981 "namespaces": [ 00:19:10.981 { 00:19:10.981 "nsid": 1, 00:19:10.981 "bdev_name": "Malloc2", 00:19:10.981 "name": "Malloc2", 00:19:10.981 "nguid": "4B146C8FEC724E6D87A194608314193F", 00:19:10.981 "uuid": "4b146c8f-ec72-4e6d-87a1-94608314193f" 00:19:10.981 } 00:19:10.981 ] 00:19:10.981 } 00:19:10.981 ] 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=280474 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:10.981 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:10.981 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.240 [2024-07-25 13:47:07.953555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:11.240 Malloc3 00:19:11.240 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:11.499 [2024-07-25 13:47:08.178089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:11.499 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:11.499 Asynchronous Event Request test 00:19:11.499 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:11.500 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:11.500 Registering asynchronous event callbacks... 00:19:11.500 Starting namespace attribute notice tests for all controllers... 00:19:11.500 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:11.500 aer_cb - Changed Namespace 00:19:11.500 Cleaning up... 00:19:11.500 [ 00:19:11.500 { 00:19:11.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:11.500 "subtype": "Discovery", 00:19:11.500 "listen_addresses": [], 00:19:11.500 "allow_any_host": true, 00:19:11.500 "hosts": [] 00:19:11.500 }, 00:19:11.500 { 00:19:11.500 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:11.500 "subtype": "NVMe", 00:19:11.500 "listen_addresses": [ 00:19:11.500 { 00:19:11.500 "trtype": "VFIOUSER", 00:19:11.500 "adrfam": "IPv4", 00:19:11.500 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:11.500 "trsvcid": "0" 00:19:11.500 } 00:19:11.500 ], 00:19:11.500 "allow_any_host": true, 00:19:11.500 "hosts": [], 00:19:11.500 "serial_number": "SPDK1", 00:19:11.500 "model_number": "SPDK bdev Controller", 00:19:11.500 "max_namespaces": 32, 00:19:11.500 "min_cntlid": 1, 00:19:11.500 "max_cntlid": 65519, 00:19:11.500 "namespaces": [ 00:19:11.500 { 00:19:11.500 "nsid": 1, 00:19:11.500 "bdev_name": "Malloc1", 00:19:11.500 "name": "Malloc1", 00:19:11.500 "nguid": "6607F8F0313E45C7B65E0CA198C66D0A", 00:19:11.500 "uuid": "6607f8f0-313e-45c7-b65e-0ca198c66d0a" 00:19:11.500 }, 00:19:11.500 { 00:19:11.500 "nsid": 2, 00:19:11.500 "bdev_name": "Malloc3", 00:19:11.500 "name": "Malloc3", 00:19:11.500 "nguid": "757ED9EE759540289BC1B34554E81AE6", 00:19:11.500 "uuid": "757ed9ee-7595-4028-9bc1-b34554e81ae6" 00:19:11.500 } 00:19:11.500 ] 00:19:11.500 }, 00:19:11.500 { 00:19:11.500 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:11.500 "subtype": "NVMe", 00:19:11.500 "listen_addresses": [ 00:19:11.500 { 00:19:11.500 "trtype": "VFIOUSER", 00:19:11.500 "adrfam": "IPv4", 00:19:11.500 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:11.500 "trsvcid": "0" 00:19:11.500 } 00:19:11.500 ], 00:19:11.500 "allow_any_host": true, 00:19:11.500 "hosts": [], 00:19:11.500 
"serial_number": "SPDK2", 00:19:11.500 "model_number": "SPDK bdev Controller", 00:19:11.500 "max_namespaces": 32, 00:19:11.500 "min_cntlid": 1, 00:19:11.500 "max_cntlid": 65519, 00:19:11.500 "namespaces": [ 00:19:11.500 { 00:19:11.500 "nsid": 1, 00:19:11.500 "bdev_name": "Malloc2", 00:19:11.500 "name": "Malloc2", 00:19:11.500 "nguid": "4B146C8FEC724E6D87A194608314193F", 00:19:11.500 "uuid": "4b146c8f-ec72-4e6d-87a1-94608314193f" 00:19:11.500 } 00:19:11.500 ] 00:19:11.500 } 00:19:11.500 ] 00:19:11.500 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 280474 00:19:11.500 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:11.761 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:11.761 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:11.761 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:11.761 [2024-07-25 13:47:08.402270] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:19:11.761 [2024-07-25 13:47:08.402300] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280730 ] 00:19:11.761 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.761 [2024-07-25 13:47:08.415106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:11.761 [2024-07-25 13:47:08.430919] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:11.761 [2024-07-25 13:47:08.442579] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:11.761 [2024-07-25 13:47:08.442603] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5bbaf5d000 00:19:11.761 [2024-07-25 13:47:08.443582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.444584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.445585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.446605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.447600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.448602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.449612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.450617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:11.761 [2024-07-25 13:47:08.451629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:11.761 [2024-07-25 13:47:08.451641] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5bb9d22000 00:19:11.761 [2024-07-25 13:47:08.452539] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:11.761 [2024-07-25 13:47:08.463737] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:11.761 [2024-07-25 13:47:08.463763] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:11.761 [2024-07-25 13:47:08.468856] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:11.761 [2024-07-25 13:47:08.468894] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:11.761 [2024-07-25 13:47:08.468966] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:11.761 [2024-07-25 13:47:08.468983] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:11.761 [2024-07-25 13:47:08.468990] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
00:19:11.761 [2024-07-25 13:47:08.469862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:11.761 [2024-07-25 13:47:08.469876] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:11.761 [2024-07-25 13:47:08.469884] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:11.761 [2024-07-25 13:47:08.470860] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:11.761 [2024-07-25 13:47:08.470870] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:11.761 [2024-07-25 13:47:08.470879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:11.761 [2024-07-25 13:47:08.471871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:11.761 [2024-07-25 13:47:08.471881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:11.761 [2024-07-25 13:47:08.472879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:11.761 [2024-07-25 13:47:08.472889] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:11.762 [2024-07-25 13:47:08.472895] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:11.762 [2024-07-25 13:47:08.472903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:11.762 [2024-07-25 13:47:08.473010] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:11.762 [2024-07-25 13:47:08.473016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:11.762 [2024-07-25 13:47:08.473023] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:11.762 [2024-07-25 13:47:08.473886] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:11.762 [2024-07-25 13:47:08.474890] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:11.762 [2024-07-25 13:47:08.475898] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:11.762 [2024-07-25 13:47:08.476901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:11.762 [2024-07-25 13:47:08.476943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:19:11.762 [2024-07-25 13:47:08.477918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:11.762 [2024-07-25 13:47:08.477928] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:11.762 [2024-07-25 13:47:08.477934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.477955] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:11.762 [2024-07-25 13:47:08.477963] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.477976] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:11.762 [2024-07-25 13:47:08.477982] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:11.762 [2024-07-25 13:47:08.477987] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.762 [2024-07-25 13:47:08.478000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.486725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.486740] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:11.762 [2024-07-25 13:47:08.486746] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:11.762 [2024-07-25 13:47:08.486752] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:11.762 [2024-07-25 13:47:08.486758] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:11.762 [2024-07-25 13:47:08.486764] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:11.762 [2024-07-25 13:47:08.486770] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:11.762 [2024-07-25 13:47:08.486777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.486785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.486799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.491524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.491542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.762 [2024-07-25 13:47:08.491552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.762 [2024-07-25 13:47:08.491561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.762 [2024-07-25 13:47:08.491570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.762 [2024-07-25 13:47:08.491576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.491636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.491647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.497720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.497730] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:11.762 [2024-07-25 13:47:08.497740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.497751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.497758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.497768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.505720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.505774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.505783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.505792] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:11.762 [2024-07-25 13:47:08.505798] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:11.762 [2024-07-25 13:47:08.505803] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.762 [2024-07-25 13:47:08.505810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.513719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.513732] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:11.762 [2024-07-25 13:47:08.513746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.513755] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.513764] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:11.762 [2024-07-25 13:47:08.513769] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:11.762 [2024-07-25 13:47:08.513774] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.762 [2024-07-25 13:47:08.513781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.521720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.521735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.521745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.521753] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:11.762 [2024-07-25 13:47:08.521758] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:11.762 [2024-07-25 13:47:08.521763] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.762 [2024-07-25 13:47:08.521770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.529720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.529732] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529741] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529774] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529780] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:11.762 [2024-07-25 13:47:08.529786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:11.762 [2024-07-25 13:47:08.529793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:11.762 [2024-07-25 13:47:08.529811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.537719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:11.762 [2024-07-25 13:47:08.537734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:11.762 [2024-07-25 13:47:08.545722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:11.763 [2024-07-25 13:47:08.545737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:11.763 [2024-07-25 13:47:08.553720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:11.763 [2024-07-25 13:47:08.553735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:11.763 [2024-07-25 13:47:08.561718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:11.763 [2024-07-25 13:47:08.561736] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:11.763 [2024-07-25 13:47:08.561742] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:11.763 [2024-07-25 13:47:08.561747] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:11.763 [2024-07-25 13:47:08.561752] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:11.763 [2024-07-25 13:47:08.561756] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:11.763 [2024-07-25 13:47:08.561763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:11.763 [2024-07-25 13:47:08.561771] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:11.763 [2024-07-25 13:47:08.561777] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:11.763 [2024-07-25 13:47:08.561783] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.763 [2024-07-25 13:47:08.561790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:11.763 [2024-07-25 13:47:08.561797] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:11.763 [2024-07-25 13:47:08.561803] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:11.763 [2024-07-25 13:47:08.561807] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.763 [2024-07-25 13:47:08.561814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:11.763 [2024-07-25 13:47:08.561822] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:11.763 [2024-07-25 13:47:08.561828] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:11.763 [2024-07-25 13:47:08.561832] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:11.763 [2024-07-25 13:47:08.561839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:11.763 [2024-07-25 13:47:08.569719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:11.763 [2024-07-25 13:47:08.569735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:11.763 [2024-07-25 13:47:08.569747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:11.763 [2024-07-25 13:47:08.569756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:11.763 ===================================================== 00:19:11.763 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:11.763 ===================================================== 00:19:11.763 Controller Capabilities/Features 00:19:11.763 ================================ 00:19:11.763 Vendor ID: 4e58 00:19:11.763 Subsystem Vendor ID: 4e58 00:19:11.763 Serial Number: SPDK2 00:19:11.763 Model Number: SPDK bdev Controller 00:19:11.763 Firmware Version: 24.09 00:19:11.763 Recommended Arb Burst: 6 00:19:11.763 IEEE OUI Identifier: 8d 6b 50 00:19:11.763 Multi-path I/O 00:19:11.763 May have multiple subsystem ports: Yes 00:19:11.763 May have multiple controllers: Yes 00:19:11.763 Associated with SR-IOV VF: No 00:19:11.763 Max Data Transfer Size: 131072 00:19:11.763 Max Number of Namespaces: 32 00:19:11.763 Max Number of I/O Queues: 127 00:19:11.763 NVMe Specification Version (VS): 1.3 00:19:11.763 NVMe Specification Version (Identify): 1.3 00:19:11.763 Maximum Queue Entries: 256 00:19:11.763 Contiguous Queues Required: Yes 00:19:11.763 Arbitration Mechanisms Supported 00:19:11.763 Weighted Round Robin: Not Supported 00:19:11.763 Vendor Specific: Not Supported 00:19:11.763 Reset Timeout: 15000 ms 00:19:11.763 Doorbell Stride: 4 bytes 00:19:11.763 NVM Subsystem Reset: Not Supported 00:19:11.763 Command Sets Supported 00:19:11.763 NVM Command Set: Supported 00:19:11.763 Boot Partition: Not Supported 00:19:11.763 Memory Page Size Minimum: 4096 bytes 00:19:11.763 Memory Page Size Maximum: 4096 bytes 00:19:11.763 Persistent Memory Region: Not Supported 00:19:11.763 Optional Asynchronous Events Supported 00:19:11.763 Namespace Attribute Notices: 
Supported 00:19:11.763 Firmware Activation Notices: Not Supported 00:19:11.763 ANA Change Notices: Not Supported 00:19:11.763 PLE Aggregate Log Change Notices: Not Supported 00:19:11.763 LBA Status Info Alert Notices: Not Supported 00:19:11.763 EGE Aggregate Log Change Notices: Not Supported 00:19:11.763 Normal NVM Subsystem Shutdown event: Not Supported 00:19:11.763 Zone Descriptor Change Notices: Not Supported 00:19:11.763 Discovery Log Change Notices: Not Supported 00:19:11.763 Controller Attributes 00:19:11.763 128-bit Host Identifier: Supported 00:19:11.763 Non-Operational Permissive Mode: Not Supported 00:19:11.763 NVM Sets: Not Supported 00:19:11.763 Read Recovery Levels: Not Supported 00:19:11.763 Endurance Groups: Not Supported 00:19:11.763 Predictable Latency Mode: Not Supported 00:19:11.763 Traffic Based Keep ALive: Not Supported 00:19:11.763 Namespace Granularity: Not Supported 00:19:11.763 SQ Associations: Not Supported 00:19:11.763 UUID List: Not Supported 00:19:11.763 Multi-Domain Subsystem: Not Supported 00:19:11.763 Fixed Capacity Management: Not Supported 00:19:11.763 Variable Capacity Management: Not Supported 00:19:11.763 Delete Endurance Group: Not Supported 00:19:11.763 Delete NVM Set: Not Supported 00:19:11.763 Extended LBA Formats Supported: Not Supported 00:19:11.763 Flexible Data Placement Supported: Not Supported 00:19:11.763 00:19:11.763 Controller Memory Buffer Support 00:19:11.763 ================================ 00:19:11.763 Supported: No 00:19:11.763 00:19:11.763 Persistent Memory Region Support 00:19:11.763 ================================ 00:19:11.763 Supported: No 00:19:11.763 00:19:11.763 Admin Command Set Attributes 00:19:11.763 ============================ 00:19:11.763 Security Send/Receive: Not Supported 00:19:11.763 Format NVM: Not Supported 00:19:11.763 Firmware Activate/Download: Not Supported 00:19:11.763 Namespace Management: Not Supported 00:19:11.763 Device Self-Test: Not Supported 00:19:11.763 Directives: Not Supported 00:19:11.763 NVMe-MI: Not Supported 00:19:11.763 Virtualization Management: Not Supported 00:19:11.763 Doorbell Buffer Config: Not Supported 00:19:11.763 Get LBA Status Capability: Not Supported 00:19:11.763 Command & Feature Lockdown Capability: Not Supported 00:19:11.763 Abort Command Limit: 4 00:19:11.763 Async Event Request Limit: 4 00:19:11.763 Number of Firmware Slots: N/A 00:19:11.763 Firmware Slot 1 Read-Only: N/A 00:19:11.763 Firmware Activation Without Reset: N/A 00:19:11.763 Multiple Update Detection Support: N/A 00:19:11.763 Firmware Update Granularity: No Information Provided 00:19:11.763 Per-Namespace SMART Log: No 00:19:11.763 Asymmetric Namespace Access Log Page: Not Supported 00:19:11.763 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:11.763 Command Effects Log Page: Supported 00:19:11.763 Get Log Page Extended Data: Supported 00:19:11.763 Telemetry Log Pages: Not Supported 00:19:11.763 Persistent Event Log Pages: Not Supported 00:19:11.763 Supported Log Pages Log Page: May Support 00:19:11.763 Commands Supported & Effects Log Page: Not Supported 00:19:11.763 Feature Identifiers & Effects Log Page:May Support 00:19:11.763 NVMe-MI Commands & Effects Log Page: May Support 00:19:11.763 Data Area 4 for Telemetry Log: Not Supported 00:19:11.763 Error Log Page Entries Supported: 128 00:19:11.763 Keep Alive: Supported 00:19:11.763 Keep Alive Granularity: 10000 ms 00:19:11.763 00:19:11.763 NVM Command Set Attributes 00:19:11.763 ========================== 00:19:11.763 Submission Queue Entry Size 00:19:11.763 Max: 64 
00:19:11.763 Min: 64 00:19:11.763 Completion Queue Entry Size 00:19:11.763 Max: 16 00:19:11.763 Min: 16 00:19:11.763 Number of Namespaces: 32 00:19:11.763 Compare Command: Supported 00:19:11.763 Write Uncorrectable Command: Not Supported 00:19:11.763 Dataset Management Command: Supported 00:19:11.763 Write Zeroes Command: Supported 00:19:11.763 Set Features Save Field: Not Supported 00:19:11.763 Reservations: Not Supported 00:19:11.763 Timestamp: Not Supported 00:19:11.763 Copy: Supported 00:19:11.763 Volatile Write Cache: Present 00:19:11.763 Atomic Write Unit (Normal): 1 00:19:11.763 Atomic Write Unit (PFail): 1 00:19:11.763 Atomic Compare & Write Unit: 1 00:19:11.763 Fused Compare & Write: Supported 00:19:11.763 Scatter-Gather List 00:19:11.764 SGL Command Set: Supported (Dword aligned) 00:19:11.764 SGL Keyed: Not Supported 00:19:11.764 SGL Bit Bucket Descriptor: Not Supported 00:19:11.764 SGL Metadata Pointer: Not Supported 00:19:11.764 Oversized SGL: Not Supported 00:19:11.764 SGL Metadata Address: Not Supported 00:19:11.764 SGL Offset: Not Supported 00:19:11.764 Transport SGL Data Block: Not Supported 00:19:11.764 Replay Protected Memory Block: Not Supported 00:19:11.764 00:19:11.764 Firmware Slot Information 00:19:11.764 ========================= 00:19:11.764 Active slot: 1 00:19:11.764 Slot 1 Firmware Revision: 24.09 00:19:11.764 00:19:11.764 00:19:11.764 Commands Supported and Effects 00:19:11.764 ============================== 00:19:11.764 Admin Commands 00:19:11.764 -------------- 00:19:11.764 Get Log Page (02h): Supported 00:19:11.764 Identify (06h): Supported 00:19:11.764 Abort (08h): Supported 00:19:11.764 Set Features (09h): Supported 00:19:11.764 Get Features (0Ah): Supported 00:19:11.764 Asynchronous Event Request (0Ch): Supported 00:19:11.764 Keep Alive (18h): Supported 00:19:11.764 I/O Commands 00:19:11.764 ------------ 00:19:11.764 Flush (00h): Supported LBA-Change 00:19:11.764 Write (01h): Supported LBA-Change 00:19:11.764 Read (02h): Supported 00:19:11.764 Compare (05h): Supported 00:19:11.764 Write Zeroes (08h): Supported LBA-Change 00:19:11.764 Dataset Management (09h): Supported LBA-Change 00:19:11.764 Copy (19h): Supported LBA-Change 00:19:11.764 00:19:11.764 Error Log 00:19:11.764 ========= 00:19:11.764 00:19:11.764 Arbitration 00:19:11.764 =========== 00:19:11.764 Arbitration Burst: 1 00:19:11.764 00:19:11.764 Power Management 00:19:11.764 ================ 00:19:11.764 Number of Power States: 1 00:19:11.764 Current Power State: Power State #0 00:19:11.764 Power State #0: 00:19:11.764 Max Power: 0.00 W 00:19:11.764 Non-Operational State: Operational 00:19:11.764 Entry Latency: Not Reported 00:19:11.764 Exit Latency: Not Reported 00:19:11.764 Relative Read Throughput: 0 00:19:11.764 Relative Read Latency: 0 00:19:11.764 Relative Write Throughput: 0 00:19:11.764 Relative Write Latency: 0 00:19:11.764 Idle Power: Not Reported 00:19:11.764 Active Power: Not Reported 00:19:11.764 Non-Operational Permissive Mode: Not Supported 00:19:11.764 00:19:11.764 Health Information 00:19:11.764 ================== 00:19:11.764 Critical Warnings: 00:19:11.764 Available Spare Space: OK 00:19:11.764 Temperature: OK 00:19:11.764 Device Reliability: OK 00:19:11.764 Read Only: No 00:19:11.764 Volatile Memory Backup: OK 00:19:11.764 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:11.764 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:11.764 Available Spare: 0% 00:19:11.764 Available Spare Threshold: 0% [2024-07-25 13:47:08.569845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:19:11.764 [2024-07-25 13:47:08.577720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:11.764 [2024-07-25 13:47:08.577755] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:11.764 [2024-07-25 13:47:08.577766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.764 [2024-07-25 13:47:08.577773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.764 [2024-07-25 13:47:08.577781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.764 [2024-07-25 13:47:08.577789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.764 [2024-07-25 13:47:08.577849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:11.764 [2024-07-25 13:47:08.577861] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:11.764 [2024-07-25 13:47:08.578846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:11.764 [2024-07-25 13:47:08.581727] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:11.764 [2024-07-25 13:47:08.581736] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:11.764 [2024-07-25 13:47:08.581867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:11.764 [2024-07-25 13:47:08.581880] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:11.764 [2024-07-25 13:47:08.581929] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:11.764 [2024-07-25 13:47:08.582890] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:11.764 Life Percentage Used: 0% 00:19:11.764 Data Units Read: 0 00:19:11.764 Data Units Written: 0 00:19:11.764 Host Read Commands: 0 00:19:11.764 Host Write Commands: 0 00:19:11.764 Controller Busy Time: 0 minutes 00:19:11.764 Power Cycles: 0 00:19:11.764 Power On Hours: 0 hours 00:19:11.764 Unsafe Shutdowns: 0 00:19:11.764 Unrecoverable Media Errors: 0 00:19:11.764 Lifetime Error Log Entries: 0 00:19:11.764 Warning Temperature Time: 0 minutes 00:19:11.764 Critical Temperature Time: 0 minutes 00:19:11.764 00:19:11.764 Number of Queues 00:19:11.764 ================ 00:19:11.764 Number of I/O Submission Queues: 127 00:19:11.764 Number of I/O Completion Queues: 127 00:19:11.764 00:19:11.764 Active Namespaces 00:19:11.764 ================= 00:19:11.764 Namespace ID:1 00:19:11.764 Error Recovery Timeout: Unlimited 00:19:11.764 Command Set Identifier: NVM (00h) 00:19:11.764 Deallocate: Supported 00:19:11.764 Deallocated/Unwritten Error: Not
Supported 00:19:11.764 Deallocated Read Value: Unknown 00:19:11.764 Deallocate in Write Zeroes: Not Supported 00:19:11.764 Deallocated Guard Field: 0xFFFF 00:19:11.764 Flush: Supported 00:19:11.764 Reservation: Supported 00:19:11.764 Namespace Sharing Capabilities: Multiple Controllers 00:19:11.764 Size (in LBAs): 131072 (0GiB) 00:19:11.764 Capacity (in LBAs): 131072 (0GiB) 00:19:11.764 Utilization (in LBAs): 131072 (0GiB) 00:19:11.764 NGUID: 4B146C8FEC724E6D87A194608314193F 00:19:11.764 UUID: 4b146c8f-ec72-4e6d-87a1-94608314193f 00:19:11.764 Thin Provisioning: Not Supported 00:19:11.764 Per-NS Atomic Units: Yes 00:19:11.764 Atomic Boundary Size (Normal): 0 00:19:11.764 Atomic Boundary Size (PFail): 0 00:19:11.764 Atomic Boundary Offset: 0 00:19:11.764 Maximum Single Source Range Length: 65535 00:19:11.764 Maximum Copy Length: 65535 00:19:11.764 Maximum Source Range Count: 1 00:19:11.764 NGUID/EUI64 Never Reused: No 00:19:11.764 Namespace Write Protected: No 00:19:11.764 Number of LBA Formats: 1 00:19:11.764 Current LBA Format: LBA Format #00 00:19:11.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:11.764 00:19:11.764 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:12.024 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.024 [2024-07-25 13:47:08.795729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.312 Initializing NVMe Controllers 00:19:17.312 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:17.312 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:17.312 Initialization complete. Launching workers. 00:19:17.312 ======================================================== 00:19:17.312 Latency(us) 00:19:17.312 Device Information : IOPS MiB/s Average min max 00:19:17.312 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39942.76 156.03 3204.42 918.07 8672.99 00:19:17.312 ======================================================== 00:19:17.312 Total : 39942.76 156.03 3204.42 918.07 8672.99 00:19:17.312 00:19:17.312 [2024-07-25 13:47:13.897974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:17.312 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:17.312 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.312 [2024-07-25 13:47:14.117632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.580 Initializing NVMe Controllers 00:19:22.580 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:22.580 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:22.580 Initialization complete. Launching workers. 
00:19:22.580 ======================================================== 00:19:22.580 Latency(us) 00:19:22.580 Device Information : IOPS MiB/s Average min max 00:19:22.580 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39954.28 156.07 3203.50 933.18 7046.96 00:19:22.580 ======================================================== 00:19:22.580 Total : 39954.28 156.07 3203.50 933.18 7046.96 00:19:22.580 00:19:22.580 [2024-07-25 13:47:19.139040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:22.580 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:22.580 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.580 [2024-07-25 13:47:19.361143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:27.934 [2024-07-25 13:47:24.513816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:27.934 Initializing NVMe Controllers 00:19:27.934 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:27.934 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:27.934 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:27.934 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:27.934 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:27.934 Initialization complete. Launching workers. 00:19:27.934 Starting thread on core 2 00:19:27.934 Starting thread on core 3 00:19:27.934 Starting thread on core 1 00:19:27.934 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:27.934 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.934 [2024-07-25 13:47:24.816143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:31.223 [2024-07-25 13:47:27.875269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:31.223 Initializing NVMe Controllers 00:19:31.223 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:31.223 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:31.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:31.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:31.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:31.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:31.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:31.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:31.223 Initialization complete. Launching workers. 
00:19:31.223 Starting thread on core 1 with urgent priority queue 00:19:31.223 Starting thread on core 2 with urgent priority queue 00:19:31.223 Starting thread on core 3 with urgent priority queue 00:19:31.223 Starting thread on core 0 with urgent priority queue 00:19:31.223 SPDK bdev Controller (SPDK2 ) core 0: 8729.67 IO/s 11.46 secs/100000 ios 00:19:31.223 SPDK bdev Controller (SPDK2 ) core 1: 9000.33 IO/s 11.11 secs/100000 ios 00:19:31.223 SPDK bdev Controller (SPDK2 ) core 2: 11169.67 IO/s 8.95 secs/100000 ios 00:19:31.223 SPDK bdev Controller (SPDK2 ) core 3: 9246.00 IO/s 10.82 secs/100000 ios 00:19:31.223 ======================================================== 00:19:31.223 00:19:31.223 13:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:31.223 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.482 [2024-07-25 13:47:28.168152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:31.482 Initializing NVMe Controllers 00:19:31.482 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:31.482 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:31.482 Namespace ID: 1 size: 0GB 00:19:31.482 Initialization complete. 00:19:31.482 INFO: using host memory buffer for IO 00:19:31.482 Hello world! 00:19:31.482 [2024-07-25 13:47:28.180228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:31.482 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:31.482 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.740 [2024-07-25 13:47:28.464961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:32.678 Initializing NVMe Controllers 00:19:32.678 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:32.678 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:32.678 Initialization complete. Launching workers. 
00:19:32.678 submit (in ns) avg, min, max = 4926.8, 3032.0, 4000237.6 00:19:32.678 complete (in ns) avg, min, max = 21408.1, 1661.6, 3998425.6 00:19:32.678 00:19:32.678 Submit histogram 00:19:32.678 ================ 00:19:32.678 Range in us Cumulative Count 00:19:32.678 3.021 - 3.034: 0.0059% ( 1) 00:19:32.678 3.034 - 3.046: 0.0296% ( 4) 00:19:32.678 3.046 - 3.059: 0.0474% ( 3) 00:19:32.678 3.059 - 3.072: 0.0652% ( 3) 00:19:32.678 3.072 - 3.085: 0.1719% ( 18) 00:19:32.678 3.085 - 3.098: 0.7644% ( 100) 00:19:32.678 3.098 - 3.110: 2.2637% ( 253) 00:19:32.678 3.110 - 3.123: 4.8889% ( 443) 00:19:32.678 3.123 - 3.136: 9.0667% ( 705) 00:19:32.678 3.136 - 3.149: 13.5585% ( 758) 00:19:32.678 3.149 - 3.162: 18.9630% ( 912) 00:19:32.678 3.162 - 3.174: 24.8711% ( 997) 00:19:32.678 3.174 - 3.187: 30.6963% ( 983) 00:19:32.678 3.187 - 3.200: 37.2385% ( 1104) 00:19:32.678 3.200 - 3.213: 44.3970% ( 1208) 00:19:32.678 3.213 - 3.226: 51.9170% ( 1269) 00:19:32.678 3.226 - 3.238: 57.2207% ( 895) 00:19:32.678 3.238 - 3.251: 60.8237% ( 608) 00:19:32.678 3.251 - 3.264: 63.1052% ( 385) 00:19:32.678 3.264 - 3.277: 65.5822% ( 418) 00:19:32.678 3.277 - 3.302: 70.1867% ( 777) 00:19:32.678 3.302 - 3.328: 74.6193% ( 748) 00:19:32.678 3.328 - 3.354: 81.8252% ( 1216) 00:19:32.678 3.354 - 3.379: 86.9333% ( 862) 00:19:32.678 3.379 - 3.405: 88.3496% ( 239) 00:19:32.678 3.405 - 3.430: 89.0726% ( 122) 00:19:32.678 3.430 - 3.456: 89.9793% ( 153) 00:19:32.678 3.456 - 3.482: 91.4370% ( 246) 00:19:32.678 3.482 - 3.507: 93.2148% ( 300) 00:19:32.678 3.507 - 3.533: 95.0815% ( 315) 00:19:32.678 3.533 - 3.558: 96.1185% ( 175) 00:19:32.678 3.558 - 3.584: 97.0430% ( 156) 00:19:32.678 3.584 - 3.610: 98.0444% ( 169) 00:19:32.678 3.610 - 3.635: 98.8681% ( 139) 00:19:32.678 3.635 - 3.661: 99.1704% ( 51) 00:19:32.678 3.661 - 3.686: 99.4252% ( 43) 00:19:32.678 3.686 - 3.712: 99.5793% ( 26) 00:19:32.678 3.712 - 3.738: 99.6978% ( 20) 00:19:32.678 3.738 - 3.763: 99.7037% ( 1) 00:19:32.678 3.763 - 3.789: 99.7156% ( 2) 00:19:32.678 3.789 - 3.814: 99.7274% ( 2) 00:19:32.678 3.840 - 3.866: 99.7333% ( 1) 00:19:32.678 5.811 - 5.837: 99.7393% ( 1) 00:19:32.678 5.888 - 5.914: 99.7452% ( 1) 00:19:32.678 6.067 - 6.093: 99.7570% ( 2) 00:19:32.678 6.118 - 6.144: 99.7689% ( 2) 00:19:32.678 6.170 - 6.195: 99.7807% ( 2) 00:19:32.678 6.349 - 6.374: 99.7867% ( 1) 00:19:32.678 6.374 - 6.400: 99.7926% ( 1) 00:19:32.678 6.400 - 6.426: 99.7985% ( 1) 00:19:32.678 6.554 - 6.605: 99.8163% ( 3) 00:19:32.678 6.656 - 6.707: 99.8281% ( 2) 00:19:32.678 6.707 - 6.758: 99.8341% ( 1) 00:19:32.678 6.810 - 6.861: 99.8459% ( 2) 00:19:32.678 6.861 - 6.912: 99.8578% ( 2) 00:19:32.678 6.912 - 6.963: 99.8696% ( 2) 00:19:32.678 6.963 - 7.014: 99.8756% ( 1) 00:19:32.678 7.014 - 7.066: 99.8815% ( 1) 00:19:32.678 7.117 - 7.168: 99.8874% ( 1) 00:19:32.678 7.168 - 7.219: 99.9052% ( 3) 00:19:32.678 7.322 - 7.373: 99.9111% ( 1) 00:19:32.678 7.475 - 7.526: 99.9230% ( 2) 00:19:32.678 7.936 - 7.987: 99.9289% ( 1) 00:19:32.678 8.038 - 8.090: 99.9348% ( 1) 00:19:32.678 11.008 - 11.059: 99.9407% ( 1) 00:19:32.678 11.827 - 11.878: 99.9467% ( 1) 00:19:32.678 12.032 - 12.083: 99.9526% ( 1) 00:19:32.678 14.541 - 14.643: 99.9585% ( 1) 00:19:32.678 3984.589 - 4010.803: 100.0000% ( 7) 00:19:32.678 00:19:32.678 Complete histogram 00:19:32.678 ================== 00:19:32.678 Range in us Cumulative Count 00:19:32.678 1.651 - 1.664: 0.0059% ( 1) 00:19:32.678 1.664 - 1.677: 0.0593% ( 9) 00:19:32.678 1.677 - 1.690: 0.1126% ( 9) 00:19:32.678 1.690 - 1.702: 0.1244% ( 2) 00:19:32.678 1.702 - 1.715: 4.1185% ( 674) 
00:19:32.678 1.715 - 1.728: 25.4044% ( 3592) 00:19:32.678 1.728 - 1.741: 33.8193% ( 1420) 00:19:32.678 1.741 - 1.754: 36.0770% ( 381) [2024-07-25 13:47:29.566570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:32.937 1.754 - 1.766: 39.4133% ( 563) 00:19:32.937 1.766 - 1.779: 68.8652% ( 4970) 00:19:32.937 1.779 - 1.792: 91.7037% ( 3854) 00:19:32.937 1.792 - 1.805: 96.1244% ( 746) 00:19:32.937 1.805 - 1.818: 97.9793% ( 313) 00:19:32.937 1.818 - 1.830: 98.2696% ( 49) 00:19:32.937 1.830 - 1.843: 98.5896% ( 54) 00:19:32.937 1.843 - 1.856: 98.9156% ( 55) 00:19:32.937 1.856 - 1.869: 99.1644% ( 42) 00:19:32.937 1.869 - 1.882: 99.2415% ( 13) 00:19:32.937 1.882 - 1.894: 99.2830% ( 7) 00:19:32.937 1.894 - 1.907: 99.3007% ( 3) 00:19:32.937 1.907 - 1.920: 99.3363% ( 6) 00:19:32.937 1.920 - 1.933: 99.3481% ( 2) 00:19:32.937 1.933 - 1.946: 99.3719% ( 4) 00:19:32.937 1.984 - 1.997: 99.3778% ( 1) 00:19:32.937 4.352 - 4.378: 99.3837% ( 1) 00:19:32.937 4.506 - 4.531: 99.3896% ( 1) 00:19:32.937 4.685 - 4.710: 99.3956% ( 1) 00:19:32.937 4.838 - 4.864: 99.4015% ( 1) 00:19:32.938 4.915 - 4.941: 99.4074% ( 1) 00:19:32.938 4.992 - 5.018: 99.4133% ( 1) 00:19:32.938 5.299 - 5.325: 99.4193% ( 1) 00:19:32.938 5.427 - 5.453: 99.4252% ( 1) 00:19:32.938 5.786 - 5.811: 99.4311% ( 1) 00:19:32.938 5.965 - 5.990: 99.4370% ( 1) 00:19:32.938 6.016 - 6.042: 99.4489% ( 2) 00:19:32.938 6.118 - 6.144: 99.4548% ( 1) 00:19:32.938 6.144 - 6.170: 99.4607% ( 1) 00:19:32.938 6.451 - 6.477: 99.4667% ( 1) 00:19:32.938 6.605 - 6.656: 99.4726% ( 1) 00:19:32.938 6.707 - 6.758: 99.4785% ( 1) 00:19:32.938 6.963 - 7.014: 99.4904% ( 2) 00:19:32.938 10.598 - 10.650: 99.4963% ( 1) 00:19:32.938 17.306 - 17.408: 99.5022% ( 1) 00:19:32.938 17.613 - 17.715: 99.5081% ( 1) 00:19:32.938 3827.302 - 3853.517: 99.5141% ( 1) 00:19:32.938 3984.589 - 4010.803: 100.0000% ( 82) 00:19:32.938 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:32.938 [ 00:19:32.938 { 00:19:32.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:32.938 "subtype": "Discovery", 00:19:32.938 "listen_addresses": [], 00:19:32.938 "allow_any_host": true, 00:19:32.938 "hosts": [] 00:19:32.938 }, 00:19:32.938 { 00:19:32.938 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:32.938 "subtype": "NVMe", 00:19:32.938 "listen_addresses": [ 00:19:32.938 { 00:19:32.938 "trtype": "VFIOUSER", 00:19:32.938 "adrfam": "IPv4", 00:19:32.938 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:32.938 "trsvcid": "0" 00:19:32.938 } 00:19:32.938 ], 00:19:32.938 "allow_any_host": true, 00:19:32.938 "hosts": [], 00:19:32.938 "serial_number": "SPDK1", 00:19:32.938 "model_number": "SPDK bdev Controller", 00:19:32.938 "max_namespaces": 32, 00:19:32.938 "min_cntlid": 1, 00:19:32.938 "max_cntlid": 65519, 00:19:32.938
"namespaces": [ 00:19:32.938 { 00:19:32.938 "nsid": 1, 00:19:32.938 "bdev_name": "Malloc1", 00:19:32.938 "name": "Malloc1", 00:19:32.938 "nguid": "6607F8F0313E45C7B65E0CA198C66D0A", 00:19:32.938 "uuid": "6607f8f0-313e-45c7-b65e-0ca198c66d0a" 00:19:32.938 }, 00:19:32.938 { 00:19:32.938 "nsid": 2, 00:19:32.938 "bdev_name": "Malloc3", 00:19:32.938 "name": "Malloc3", 00:19:32.938 "nguid": "757ED9EE759540289BC1B34554E81AE6", 00:19:32.938 "uuid": "757ed9ee-7595-4028-9bc1-b34554e81ae6" 00:19:32.938 } 00:19:32.938 ] 00:19:32.938 }, 00:19:32.938 { 00:19:32.938 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:32.938 "subtype": "NVMe", 00:19:32.938 "listen_addresses": [ 00:19:32.938 { 00:19:32.938 "trtype": "VFIOUSER", 00:19:32.938 "adrfam": "IPv4", 00:19:32.938 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:32.938 "trsvcid": "0" 00:19:32.938 } 00:19:32.938 ], 00:19:32.938 "allow_any_host": true, 00:19:32.938 "hosts": [], 00:19:32.938 "serial_number": "SPDK2", 00:19:32.938 "model_number": "SPDK bdev Controller", 00:19:32.938 "max_namespaces": 32, 00:19:32.938 "min_cntlid": 1, 00:19:32.938 "max_cntlid": 65519, 00:19:32.938 "namespaces": [ 00:19:32.938 { 00:19:32.938 "nsid": 1, 00:19:32.938 "bdev_name": "Malloc2", 00:19:32.938 "name": "Malloc2", 00:19:32.938 "nguid": "4B146C8FEC724E6D87A194608314193F", 00:19:32.938 "uuid": "4b146c8f-ec72-4e6d-87a1-94608314193f" 00:19:32.938 } 00:19:32.938 ] 00:19:32.938 } 00:19:32.938 ] 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=284171 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:32.938 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:33.196 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.196 [2024-07-25 13:47:29.938987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:33.196 Malloc4 00:19:33.196 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:33.455 [2024-07-25 13:47:30.143489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:33.455 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:33.455 Asynchronous Event Request test 00:19:33.455 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:33.455 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:33.455 Registering asynchronous event callbacks... 00:19:33.455 Starting namespace attribute notice tests for all controllers... 00:19:33.455 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:33.455 aer_cb - Changed Namespace 00:19:33.455 Cleaning up... 00:19:33.455 [ 00:19:33.455 { 00:19:33.455 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:33.455 "subtype": "Discovery", 00:19:33.455 "listen_addresses": [], 00:19:33.455 "allow_any_host": true, 00:19:33.455 "hosts": [] 00:19:33.455 }, 00:19:33.455 { 00:19:33.455 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:33.455 "subtype": "NVMe", 00:19:33.455 "listen_addresses": [ 00:19:33.455 { 00:19:33.455 "trtype": "VFIOUSER", 00:19:33.455 "adrfam": "IPv4", 00:19:33.455 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:33.455 "trsvcid": "0" 00:19:33.455 } 00:19:33.455 ], 00:19:33.455 "allow_any_host": true, 00:19:33.455 "hosts": [], 00:19:33.455 "serial_number": "SPDK1", 00:19:33.455 "model_number": "SPDK bdev Controller", 00:19:33.455 "max_namespaces": 32, 00:19:33.455 "min_cntlid": 1, 00:19:33.455 "max_cntlid": 65519, 00:19:33.455 "namespaces": [ 00:19:33.455 { 00:19:33.455 "nsid": 1, 00:19:33.455 "bdev_name": "Malloc1", 00:19:33.455 "name": "Malloc1", 00:19:33.455 "nguid": "6607F8F0313E45C7B65E0CA198C66D0A", 00:19:33.455 "uuid": "6607f8f0-313e-45c7-b65e-0ca198c66d0a" 00:19:33.455 }, 00:19:33.455 { 00:19:33.455 "nsid": 2, 00:19:33.455 "bdev_name": "Malloc3", 00:19:33.455 "name": "Malloc3", 00:19:33.455 "nguid": "757ED9EE759540289BC1B34554E81AE6", 00:19:33.455 "uuid": "757ed9ee-7595-4028-9bc1-b34554e81ae6" 00:19:33.455 } 00:19:33.455 ] 00:19:33.455 }, 00:19:33.455 { 00:19:33.455 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:33.455 "subtype": "NVMe", 00:19:33.455 "listen_addresses": [ 00:19:33.455 { 00:19:33.455 "trtype": "VFIOUSER", 00:19:33.455 "adrfam": "IPv4", 00:19:33.455 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:33.455 "trsvcid": "0" 00:19:33.455 } 00:19:33.455 ], 00:19:33.455 "allow_any_host": true, 00:19:33.455 "hosts": [], 00:19:33.455 
"serial_number": "SPDK2", 00:19:33.455 "model_number": "SPDK bdev Controller", 00:19:33.455 "max_namespaces": 32, 00:19:33.455 "min_cntlid": 1, 00:19:33.455 "max_cntlid": 65519, 00:19:33.455 "namespaces": [ 00:19:33.455 { 00:19:33.455 "nsid": 1, 00:19:33.455 "bdev_name": "Malloc2", 00:19:33.455 "name": "Malloc2", 00:19:33.455 "nguid": "4B146C8FEC724E6D87A194608314193F", 00:19:33.455 "uuid": "4b146c8f-ec72-4e6d-87a1-94608314193f" 00:19:33.455 }, 00:19:33.455 { 00:19:33.455 "nsid": 2, 00:19:33.455 "bdev_name": "Malloc4", 00:19:33.455 "name": "Malloc4", 00:19:33.455 "nguid": "D068402B56B74FCAA781A591C1AA25EE", 00:19:33.455 "uuid": "d068402b-56b7-4fca-a781-a591c1aa25ee" 00:19:33.455 } 00:19:33.455 ] 00:19:33.455 } 00:19:33.455 ] 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 284171 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 276479 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 276479 ']' 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 276479 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276479 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276479' 00:19:33.715 killing process with pid 276479 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 276479 00:19:33.715 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 276479 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=284436 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 284436' 00:19:33.975 Process pid: 284436 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:33.975 13:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 284436 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 284436 ']' 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.975 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:33.975 [2024-07-25 13:47:30.721717] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:33.975 [2024-07-25 13:47:30.722639] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:19:33.975 [2024-07-25 13:47:30.722680] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.975 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.975 [2024-07-25 13:47:30.757596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:33.975 [2024-07-25 13:47:30.793305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.975 [2024-07-25 13:47:30.832355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.975 [2024-07-25 13:47:30.832397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.975 [2024-07-25 13:47:30.832407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.975 [2024-07-25 13:47:30.832416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.975 [2024-07-25 13:47:30.832423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.975 [2024-07-25 13:47:30.832473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.975 [2024-07-25 13:47:30.832568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.975 [2024-07-25 13:47:30.832656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.975 [2024-07-25 13:47:30.832657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.234 [2024-07-25 13:47:30.902839] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:34.234 [2024-07-25 13:47:30.902938] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:19:34.234 [2024-07-25 13:47:30.903112] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:34.234 [2024-07-25 13:47:30.903484] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:34.234 [2024-07-25 13:47:30.903741] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:34.801 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.801 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:34.801 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:35.736 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:35.994 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:35.994 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:35.994 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:35.994 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:35.994 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:36.254 Malloc1 00:19:36.254 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:36.254 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:36.513 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:36.772 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:36.772 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:36.772 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:36.772 Malloc2 00:19:36.772 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:37.031 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:37.290 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 284436 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 284436 ']' 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 284436 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 284436 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 284436' 00:19:37.549 killing process with pid 284436 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 284436 00:19:37.549 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 284436 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:37.808 00:19:37.808 real 0m51.450s 00:19:37.808 user 3m22.694s 00:19:37.808 sys 0m4.848s 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:37.808 ************************************ 00:19:37.808 END TEST nvmf_vfio_user 00:19:37.808 ************************************ 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:37.808 ************************************ 00:19:37.808 START TEST nvmf_vfio_user_nvme_compliance 00:19:37.808 ************************************ 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:37.808 * Looking for test storage... 
00:19:37.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.808 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=285051 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 285051' 00:19:37.809 Process pid: 285051 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 285051 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 285051 ']' 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.809 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:38.068 [2024-07-25 13:47:34.705785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:19:38.068 [2024-07-25 13:47:34.705843] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.068 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.068 [2024-07-25 13:47:34.744047] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:19:38.068 [2024-07-25 13:47:34.778212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:38.068 [2024-07-25 13:47:34.816970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.068 [2024-07-25 13:47:34.817012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.068 [2024-07-25 13:47:34.817022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.068 [2024-07-25 13:47:34.817031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.068 [2024-07-25 13:47:34.817042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.068 [2024-07-25 13:47:34.817085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.068 [2024-07-25 13:47:34.817181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.068 [2024-07-25 13:47:34.817183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.636 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.636 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:38.636 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:40.015 malloc0 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:40.015 13:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.015 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:40.015 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.015 00:19:40.015 00:19:40.015 CUnit - A unit testing framework for C - Version 2.1-3 00:19:40.015 http://cunit.sourceforge.net/ 00:19:40.015 00:19:40.015 00:19:40.015 Suite: nvme_compliance 00:19:40.015 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 13:47:36.743186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.015 [2024-07-25 13:47:36.744512] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:40.015 [2024-07-25 13:47:36.744529] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:40.015 [2024-07-25 13:47:36.744537] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:40.015 [2024-07-25 13:47:36.746207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.015 passed 00:19:40.015 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 13:47:36.824729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.015 [2024-07-25 13:47:36.827750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.015 passed 00:19:40.274 Test: admin_identify_ns ...[2024-07-25 13:47:36.906846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.274 [2024-07-25 13:47:36.968724] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:40.274 [2024-07-25 13:47:36.976726] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:40.274 [2024-07-25 13:47:36.997820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.274 passed 00:19:40.274 Test: admin_get_features_mandatory_features ...[2024-07-25 13:47:37.071231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling 
controller 00:19:40.274 [2024-07-25 13:47:37.074249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.274 passed 00:19:40.274 Test: admin_get_features_optional_features ...[2024-07-25 13:47:37.149772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.274 [2024-07-25 13:47:37.155793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.533 passed 00:19:40.533 Test: admin_set_features_number_of_queues ...[2024-07-25 13:47:37.227831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.533 [2024-07-25 13:47:37.332811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.533 passed 00:19:40.533 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 13:47:37.408141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.533 [2024-07-25 13:47:37.411159] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.792 passed 00:19:40.792 Test: admin_get_log_page_with_lpo ...[2024-07-25 13:47:37.485675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.792 [2024-07-25 13:47:37.553724] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:40.792 [2024-07-25 13:47:37.566795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.792 passed 00:19:40.792 Test: fabric_property_get ...[2024-07-25 13:47:37.639185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:40.792 [2024-07-25 13:47:37.640437] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:40.792 [2024-07-25 13:47:37.642209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:40.792 passed 00:19:41.051 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 13:47:37.718685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.051 [2024-07-25 13:47:37.719932] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:41.052 [2024-07-25 13:47:37.721709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.052 passed 00:19:41.052 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 13:47:37.796792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.052 [2024-07-25 13:47:37.882732] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:41.052 [2024-07-25 13:47:37.900720] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:41.052 [2024-07-25 13:47:37.905814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.052 passed 00:19:41.310 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 13:47:37.976284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.310 [2024-07-25 13:47:37.977502] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:41.310 [2024-07-25 13:47:37.979303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.310 passed 00:19:41.310 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 13:47:38.054846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:19:41.310 [2024-07-25 13:47:38.131723] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:41.310 [2024-07-25 13:47:38.155723] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:41.310 [2024-07-25 13:47:38.160828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.310 passed 00:19:41.570 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 13:47:38.236133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.570 [2024-07-25 13:47:38.237366] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:41.570 [2024-07-25 13:47:38.237391] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:41.570 [2024-07-25 13:47:38.239160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.570 passed 00:19:41.570 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 13:47:38.313834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.570 [2024-07-25 13:47:38.406722] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:41.570 [2024-07-25 13:47:38.414721] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:41.570 [2024-07-25 13:47:38.422726] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:41.570 [2024-07-25 13:47:38.430732] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:41.829 [2024-07-25 13:47:38.459805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.829 passed 00:19:41.829 Test: admin_create_io_sq_verify_pc ...[2024-07-25 13:47:38.535094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.829 [2024-07-25 13:47:38.550728] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:41.829 [2024-07-25 13:47:38.568265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.829 passed 00:19:41.829 Test: admin_create_io_qp_max_qps ...[2024-07-25 13:47:38.647847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:43.209 [2024-07-25 13:47:39.760726] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:43.468 [2024-07-25 13:47:40.142219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:43.468 passed 00:19:43.468 Test: admin_create_io_sq_shared_cq ...[2024-07-25 13:47:40.216832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:43.468 [2024-07-25 13:47:40.349722] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:43.726 [2024-07-25 13:47:40.386809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:43.726 passed 00:19:43.726 00:19:43.726 Run Summary: Type Total Ran Passed Failed Inactive 00:19:43.726 suites 1 1 n/a 0 0 00:19:43.726 tests 18 18 18 0 0 00:19:43.726 asserts 360 360 360 0 n/a 00:19:43.726 00:19:43.726 Elapsed time = 1.499 seconds 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 
285051 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 285051 ']' 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 285051 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285051 00:19:43.726 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:43.727 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:43.727 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285051' 00:19:43.727 killing process with pid 285051 00:19:43.727 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 285051 00:19:43.727 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 285051 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:43.986 00:19:43.986 real 0m6.138s 00:19:43.986 user 0m17.402s 00:19:43.986 sys 0m0.700s 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:43.986 ************************************ 00:19:43.986 END TEST nvmf_vfio_user_nvme_compliance 00:19:43.986 ************************************ 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.986 ************************************ 00:19:43.986 START TEST nvmf_vfio_user_fuzz 00:19:43.986 ************************************ 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:43.986 * Looking for test storage... 
00:19:43.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same tool prefixes repeated several more times, then the system PATH; full value elided] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=[elided; identical except /opt/go/1.21.1/bin is prepended] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=[elided; identical except /opt/protoc/21.7/bin is prepended] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo [elided PATH value] 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.986 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=286157 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 286157' 00:19:44.246 Process pid: 286157 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 286157 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 286157 ']' 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
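The preceding commands are the harness's standard launch-and-wait pattern: vfio_user_fuzz.sh@23 starts nvmf_tgt in the background with the flags recorded above, vfio_user_fuzz.sh@27 installs a trap that tears it down on exit, and waitforlisten blocks until the target's RPC socket at /var/tmp/spdk.sock answers before any rpc_cmd is issued. A minimal sketch of the same pattern, assuming the default socket path and using the rpc_get_methods RPC as the liveness probe (the real waitforlisten helper in autotest_common.sh is more elaborate):
nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT
# poll the RPC socket until the target is ready to serve requests
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done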
00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.246 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.246 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:44.246 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:45.623 malloc0 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.623 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
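With the target up, everything else is provisioned over RPC: vfio_user_fuzz.sh@32-@41 create a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), subsystem nqn.2021-09.io.spdk:cnode0 with malloc0 as its namespace, and a listener at /var/run/vfio-user, and the resulting transport ID string is handed to the fuzzer. The same bring-up expressed as direct rpc.py calls (a sketch; the test drives these through its rpc_cmd wrapper against the default RPC socket):
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
The nvme_fuzz run that follows connects to that listener for a 30-second randomized pass (-t 30, seed -S 123456) and replays random admin and I/O commands; its summary below reports which opcodes ever completed successfully.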
00:19:45.624 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:17.785 Fuzzing completed. Shutting down the fuzz application 00:20:17.785 00:20:17.785 Dumping successful admin opcodes: 00:20:17.785 8, 9, 10, 24, 00:20:17.785 Dumping successful io opcodes: 00:20:17.785 0, 00:20:17.785 NS: 0x200003a1ef00 I/O qp, Total commands completed: 888725, total successful commands: 3459, random_seed: 3403597184 00:20:17.785 NS: 0x200003a1ef00 admin qp, Total commands completed: 217785, total successful commands: 1750, random_seed: 821806272 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 286157 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 286157 ']' 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 286157 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286157 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286157' 00:20:17.785 killing process with pid 286157 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 286157 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 286157 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:17.785 00:20:17.785 real 0m32.155s 00:20:17.785 user 0m28.999s 00:20:17.785 sys 0m32.460s 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:17.785 ************************************ 
00:20:17.785 END TEST nvmf_vfio_user_fuzz 00:20:17.785 ************************************ 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:17.785 ************************************ 00:20:17.785 START TEST nvmf_auth_target 00:20:17.785 ************************************ 00:20:17.785 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:17.785 * Looking for test storage... 00:20:17.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.785 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same tool prefixes repeated several more times, then the system PATH; full value elided] 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=[elided; identical except /opt/go/1.21.1/bin is prepended] 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=[elided; identical except /opt/protoc/21.7/bin is prepended] 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo [elided PATH value] 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.786 13:48:13
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.786 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:21.983 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.984 13:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:21.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:21.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:21.984 Found net devices under 0000:af:00.0: cvl_0_0 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.984 13:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:21.984 Found net devices under 0000:af:00.1: cvl_0_1 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.984 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.244 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.244 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.244 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:22.244 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.244 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.244 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.244 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:22.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:20:22.244 00:20:22.244 --- 10.0.0.2 ping statistics --- 00:20:22.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.244 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:22.244 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:20:22.244 00:20:22.244 --- 10.0.0.1 ping statistics --- 00:20:22.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.244 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:22.244 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.244 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:22.244 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:22.245 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.245 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:22.245 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:22.245 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.245 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:22.245 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=295304 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 295304 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 295304 ']' 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.504 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.504 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=295328 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=31d8a93294a4f4ef81461286bb15d33a066ccfa690fa7f24 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xRn 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 31d8a93294a4f4ef81461286bb15d33a066ccfa690fa7f24 0 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 31d8a93294a4f4ef81461286bb15d33a066ccfa690fa7f24 0 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=31d8a93294a4f4ef81461286bb15d33a066ccfa690fa7f24 00:20:22.764 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xRn 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xRn 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.xRn 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5cbdc983fadb0d16860b3d4cac622760f9e4bc5dc4c88e58984a10ac66eee453 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0dK 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5cbdc983fadb0d16860b3d4cac622760f9e4bc5dc4c88e58984a10ac66eee453 3 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5cbdc983fadb0d16860b3d4cac622760f9e4bc5dc4c88e58984a10ac66eee453 3 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5cbdc983fadb0d16860b3d4cac622760f9e4bc5dc4c88e58984a10ac66eee453 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0dK 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0dK 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.0dK 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:22.764 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2f02ff0de82a6caecaa7971e5e66c93a 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.s0b 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2f02ff0de82a6caecaa7971e5e66c93a 1 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2f02ff0de82a6caecaa7971e5e66c93a 1 00:20:22.764 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2f02ff0de82a6caecaa7971e5e66c93a 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.s0b 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.s0b 00:20:22.765 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.s0b 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d082392f87c1456a4e9d68536ea51c76656a5904b45a85ae 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uQU 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d082392f87c1456a4e9d68536ea51c76656a5904b45a85ae 2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
d082392f87c1456a4e9d68536ea51c76656a5904b45a85ae 2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d082392f87c1456a4e9d68536ea51c76656a5904b45a85ae 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uQU 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uQU 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.uQU 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=506b01917c91364544da357303a449104fc31e1906509ea2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.E2V 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 506b01917c91364544da357303a449104fc31e1906509ea2 2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 506b01917c91364544da357303a449104fc31e1906509ea2 2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=506b01917c91364544da357303a449104fc31e1906509ea2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.E2V 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.E2V 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.E2V 00:20:23.024 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9c8acd1816fb0f8920d3dade2e7c6f92 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QnR 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9c8acd1816fb0f8920d3dade2e7c6f92 1 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9c8acd1816fb0f8920d3dade2e7c6f92 1 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.024 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9c8acd1816fb0f8920d3dade2e7c6f92 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QnR 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QnR 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.QnR 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3083fe9df5c183be214480c1ad798fce99d5f06c3ec16f75f78c65e834c85563 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:23.025 
13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ghv 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3083fe9df5c183be214480c1ad798fce99d5f06c3ec16f75f78c65e834c85563 3 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3083fe9df5c183be214480c1ad798fce99d5f06c3ec16f75f78c65e834c85563 3 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3083fe9df5c183be214480c1ad798fce99d5f06c3ec16f75f78c65e834c85563 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ghv 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ghv 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ghv 00:20:23.025 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 295304 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 295304 ']' 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.284 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.284 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.284 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:23.284 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 295328 /var/tmp/host.sock 00:20:23.284 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 295328 ']' 00:20:23.284 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:23.284 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.285 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
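[Editor's note, for readers following the trace: the gen_dhchap_key calls above draw len/2 random bytes with xxd, then pipe the hex key and a hash id through a small inline python program — its body is not echoed by xtrace — to produce the "DHHC-1:<id>:<base64>:" secrets written to /tmp/spdk.key-*. The sketch below reconstructs that helper from the visible arguments; the CRC32-plus-base64 layout of the blob is an assumption inferred from the shape of the secrets later in the log, not a verbatim copy of nvmf/common.sh.]

gen_dhchap_key() {   # sketch: gen_dhchap_key <digest> <hex-length>
    local digest=$1 len=$2 key file
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${ids[$digest]}" > "$file" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
# Assumed DHHC-1 layout: base64(key || CRC32(key)), CRC packed little-endian,
# prefixed with the hash id (00=none, 01=sha256, 02=sha384, 03=sha512).
blob = key + struct.pack("<I", zlib.crc32(key))
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(blob).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}
# e.g. ckeys[2]=$(gen_dhchap_key sha256 32), matching target/auth.sh@69 above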
00:20:23.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:23.285 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.285 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xRn 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xRn 00:20:23.544 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xRn 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.0dK ]] 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0dK 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0dK 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0dK 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.s0b 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 13:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.s0b 00:20:23.803 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.s0b 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.uQU ]] 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uQU 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uQU 00:20:24.062 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uQU 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E2V 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.E2V 00:20:24.320 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.E2V 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.QnR ]] 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QnR 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QnR 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QnR 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ghv 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ghv 00:20:24.580 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ghv 00:20:24.840 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:24.840 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:24.840 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.840 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.840 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.840 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.099 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.100 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.100 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.100 13:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.360 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.360 { 00:20:25.360 "cntlid": 1, 00:20:25.360 "qid": 0, 00:20:25.360 "state": "enabled", 00:20:25.360 "thread": "nvmf_tgt_poll_group_000", 00:20:25.360 "listen_address": { 00:20:25.360 "trtype": "TCP", 00:20:25.360 "adrfam": "IPv4", 00:20:25.360 "traddr": "10.0.0.2", 00:20:25.360 "trsvcid": "4420" 00:20:25.360 }, 00:20:25.360 "peer_address": { 00:20:25.360 "trtype": "TCP", 00:20:25.360 "adrfam": "IPv4", 00:20:25.360 "traddr": "10.0.0.1", 00:20:25.360 "trsvcid": "41810" 00:20:25.360 }, 00:20:25.360 "auth": { 00:20:25.360 "state": "completed", 00:20:25.360 "digest": "sha256", 00:20:25.360 "dhgroup": "null" 00:20:25.360 } 00:20:25.360 } 00:20:25.360 ]' 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.360 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.620 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.620 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.620 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.620 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.621 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.621 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.621 13:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:26.191 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.451 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:20:26.709 00:20:26.709 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.709 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.710 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.968 { 00:20:26.968 "cntlid": 3, 00:20:26.968 "qid": 0, 00:20:26.968 "state": "enabled", 00:20:26.968 "thread": "nvmf_tgt_poll_group_000", 00:20:26.968 "listen_address": { 00:20:26.968 "trtype": "TCP", 00:20:26.968 "adrfam": "IPv4", 00:20:26.968 "traddr": "10.0.0.2", 00:20:26.968 "trsvcid": "4420" 00:20:26.968 }, 00:20:26.968 "peer_address": { 00:20:26.968 "trtype": "TCP", 00:20:26.968 "adrfam": "IPv4", 00:20:26.968 "traddr": "10.0.0.1", 00:20:26.968 "trsvcid": "41838" 00:20:26.968 }, 00:20:26.968 "auth": { 00:20:26.968 "state": "completed", 00:20:26.968 "digest": "sha256", 00:20:26.968 "dhgroup": "null" 00:20:26.968 } 00:20:26.968 } 00:20:26.968 ]' 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.968 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.227 13:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.797 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.797 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.057 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.057 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.057 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.057 00:20:28.057 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.057 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.057 13:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.317 { 00:20:28.317 "cntlid": 5, 00:20:28.317 "qid": 0, 00:20:28.317 "state": "enabled", 00:20:28.317 "thread": "nvmf_tgt_poll_group_000", 00:20:28.317 "listen_address": { 00:20:28.317 "trtype": "TCP", 00:20:28.317 "adrfam": "IPv4", 00:20:28.317 "traddr": "10.0.0.2", 00:20:28.317 "trsvcid": "4420" 00:20:28.317 }, 00:20:28.317 "peer_address": { 00:20:28.317 "trtype": "TCP", 00:20:28.317 "adrfam": "IPv4", 00:20:28.317 "traddr": "10.0.0.1", 00:20:28.317 "trsvcid": "41862" 00:20:28.317 }, 00:20:28.317 "auth": { 00:20:28.317 "state": "completed", 00:20:28.317 "digest": "sha256", 00:20:28.317 "dhgroup": "null" 00:20:28.317 } 00:20:28.317 } 00:20:28.317 ]' 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.317 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.578 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:28.578 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.578 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.578 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.578 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.578 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:20:29.147 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:29.147 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.406 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.666 00:20:29.666 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.666 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.666 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.927 { 00:20:29.927 "cntlid": 7, 00:20:29.927 "qid": 0, 00:20:29.927 "state": "enabled", 00:20:29.927 "thread": "nvmf_tgt_poll_group_000", 00:20:29.927 "listen_address": { 00:20:29.927 "trtype": "TCP", 00:20:29.927 "adrfam": "IPv4", 00:20:29.927 "traddr": "10.0.0.2", 00:20:29.927 "trsvcid": "4420" 00:20:29.927 }, 00:20:29.927 "peer_address": { 00:20:29.927 "trtype": "TCP", 00:20:29.927 "adrfam": "IPv4", 00:20:29.927 "traddr": "10.0.0.1", 00:20:29.927 "trsvcid": "36772" 00:20:29.927 }, 00:20:29.927 "auth": { 00:20:29.927 "state": "completed", 00:20:29.927 "digest": "sha256", 00:20:29.927 "dhgroup": "null" 00:20:29.927 } 00:20:29.927 } 00:20:29.927 ]' 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.927 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.187 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.755 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.755 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.014 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.274 00:20:31.274 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.274 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.274 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.274 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.274 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.274 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.274 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.274 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.274 13:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.274 { 00:20:31.274 "cntlid": 9, 00:20:31.274 "qid": 0, 00:20:31.274 "state": "enabled", 00:20:31.274 "thread": "nvmf_tgt_poll_group_000", 00:20:31.274 "listen_address": { 00:20:31.274 "trtype": "TCP", 00:20:31.274 "adrfam": "IPv4", 00:20:31.274 "traddr": "10.0.0.2", 00:20:31.274 "trsvcid": "4420" 00:20:31.274 }, 00:20:31.274 "peer_address": { 00:20:31.274 "trtype": "TCP", 00:20:31.274 "adrfam": "IPv4", 00:20:31.274 "traddr": "10.0.0.1", 00:20:31.274 "trsvcid": "36804" 00:20:31.274 }, 00:20:31.274 "auth": { 00:20:31.274 "state": "completed", 00:20:31.274 "digest": "sha256", 00:20:31.274 "dhgroup": "ffdhe2048" 00:20:31.274 } 00:20:31.274 } 00:20:31.274 ]' 00:20:31.274 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.534 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.793 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:20:32.362 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.362 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.621 00:20:32.621 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.621 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.621 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.880 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.880 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.881 { 00:20:32.881 "cntlid": 11, 00:20:32.881 "qid": 0, 00:20:32.881 "state": "enabled", 00:20:32.881 "thread": "nvmf_tgt_poll_group_000", 00:20:32.881 "listen_address": { 
00:20:32.881 "trtype": "TCP", 00:20:32.881 "adrfam": "IPv4", 00:20:32.881 "traddr": "10.0.0.2", 00:20:32.881 "trsvcid": "4420" 00:20:32.881 }, 00:20:32.881 "peer_address": { 00:20:32.881 "trtype": "TCP", 00:20:32.881 "adrfam": "IPv4", 00:20:32.881 "traddr": "10.0.0.1", 00:20:32.881 "trsvcid": "36832" 00:20:32.881 }, 00:20:32.881 "auth": { 00:20:32.881 "state": "completed", 00:20:32.881 "digest": "sha256", 00:20:32.881 "dhgroup": "ffdhe2048" 00:20:32.881 } 00:20:32.881 } 00:20:32.881 ]' 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.881 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.139 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.708 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.967 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.226 00:20:34.226 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.226 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.226 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.226 { 00:20:34.226 "cntlid": 13, 00:20:34.226 "qid": 0, 00:20:34.226 "state": "enabled", 00:20:34.226 "thread": "nvmf_tgt_poll_group_000", 00:20:34.226 "listen_address": { 00:20:34.226 "trtype": "TCP", 00:20:34.226 "adrfam": "IPv4", 00:20:34.226 "traddr": "10.0.0.2", 00:20:34.226 "trsvcid": "4420" 00:20:34.226 }, 00:20:34.226 "peer_address": { 00:20:34.226 "trtype": "TCP", 00:20:34.226 "adrfam": "IPv4", 00:20:34.226 "traddr": "10.0.0.1", 00:20:34.226 "trsvcid": "36864" 00:20:34.226 }, 00:20:34.226 "auth": { 00:20:34.226 
"state": "completed", 00:20:34.226 "digest": "sha256", 00:20:34.226 "dhgroup": "ffdhe2048" 00:20:34.226 } 00:20:34.226 } 00:20:34.226 ]' 00:20:34.226 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.485 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.743 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:20:35.001 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:35.260 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.260 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.261 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.520 00:20:35.520 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.520 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.520 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.778 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.778 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.778 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.778 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.778 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.778 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.778 { 00:20:35.778 "cntlid": 15, 00:20:35.778 "qid": 0, 00:20:35.778 "state": "enabled", 00:20:35.778 "thread": "nvmf_tgt_poll_group_000", 00:20:35.778 "listen_address": { 00:20:35.778 "trtype": "TCP", 00:20:35.778 "adrfam": "IPv4", 00:20:35.778 "traddr": "10.0.0.2", 00:20:35.778 "trsvcid": "4420" 00:20:35.778 }, 00:20:35.779 "peer_address": { 00:20:35.779 "trtype": "TCP", 00:20:35.779 "adrfam": "IPv4", 00:20:35.779 "traddr": "10.0.0.1", 00:20:35.779 "trsvcid": "36896" 00:20:35.779 }, 00:20:35.779 "auth": { 00:20:35.779 "state": "completed", 00:20:35.779 "digest": "sha256", 00:20:35.779 "dhgroup": "ffdhe2048" 00:20:35.779 } 00:20:35.779 } 00:20:35.779 ]' 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.779 13:48:32 
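The qpair dump and the jq assertions around it are the verification half of each iteration: after attaching the controller, the test fetches the active queue pairs from the target and checks that the negotiated digest, DH group, and authentication state match what was configured. A minimal standalone sketch of that check, assuming the rpc.py path and subsystem NQN used throughout this log (the helper name is ours, not auth.sh's):

    # Sketch of the per-iteration auth verification (assumed helper, not verbatim auth.sh)
    verify_auth() {
        local digest=$1 dhgroup=$2 qpairs
        qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    }

    verify_auth sha256 ffdhe2048   # returns 0 for the qpair shown above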
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.779 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.038 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:20:36.605 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.606 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.864 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.123 00:20:37.123 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.123 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.123 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.382 { 00:20:37.382 "cntlid": 17, 00:20:37.382 "qid": 0, 00:20:37.382 "state": "enabled", 00:20:37.382 "thread": "nvmf_tgt_poll_group_000", 00:20:37.382 "listen_address": { 00:20:37.382 "trtype": "TCP", 00:20:37.382 "adrfam": "IPv4", 00:20:37.382 "traddr": "10.0.0.2", 00:20:37.382 "trsvcid": "4420" 00:20:37.382 }, 00:20:37.382 "peer_address": { 00:20:37.382 "trtype": "TCP", 00:20:37.382 "adrfam": "IPv4", 00:20:37.382 "traddr": "10.0.0.1", 00:20:37.382 "trsvcid": "36914" 00:20:37.382 }, 00:20:37.382 "auth": { 00:20:37.382 "state": "completed", 00:20:37.382 "digest": "sha256", 00:20:37.382 "dhgroup": "ffdhe3072" 00:20:37.382 } 00:20:37.382 } 00:20:37.382 ]' 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.382 13:48:34 
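Every block in this section is one call to the connect_authenticate helper from target/auth.sh; the `target/auth.sh@34`-`@49` markers in the trace give its shape. A reconstructed sketch follows — the variable names `hostnqn`, `subnqn`, and the `ckeys` array are assumptions inferred from the trace, not the verbatim script:

    # connect_authenticate <digest> <dhgroup> <keyid> -- reconstructed from the trace
    connect_authenticate() {
        local digest dhgroup key ckey qpairs                        # auth.sh@34
        digest=$1 dhgroup=$2 key=key$3                              # auth.sh@36
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})            # auth.sh@37
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"                        # auth.sh@39
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "$key" "${ckey[@]}"                        # auth.sh@40
        # auth.sh@44-48: fetch qpairs and assert digest/dhgroup/state (see sketch above)
        hostrpc bdev_nvme_detach_controller nvme0                   # auth.sh@49
    }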
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.382 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.641 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.210 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.210 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.210 13:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.469 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.469 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.469 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.469 00:20:38.469 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.469 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.469 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.727 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.728 { 00:20:38.728 "cntlid": 19, 00:20:38.728 "qid": 0, 00:20:38.728 "state": "enabled", 00:20:38.728 "thread": "nvmf_tgt_poll_group_000", 00:20:38.728 "listen_address": { 00:20:38.728 "trtype": "TCP", 00:20:38.728 "adrfam": "IPv4", 00:20:38.728 "traddr": "10.0.0.2", 00:20:38.728 "trsvcid": "4420" 00:20:38.728 }, 00:20:38.728 "peer_address": { 00:20:38.728 "trtype": "TCP", 00:20:38.728 "adrfam": "IPv4", 00:20:38.728 "traddr": "10.0.0.1", 00:20:38.728 "trsvcid": "36934" 00:20:38.728 }, 00:20:38.728 "auth": { 00:20:38.728 "state": "completed", 00:20:38.728 "digest": "sha256", 00:20:38.728 "dhgroup": "ffdhe3072" 00:20:38.728 } 00:20:38.728 } 00:20:38.728 ]' 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.728 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.986 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.986 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.986 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.986 13:48:35 
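The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line that recurs above relies on bash's `${var:+word}` expansion: it yields `word` only when `var` is set and non-empty, so for a key ID with no controller key the array stays empty and the `--dhchap-ctrlr-key` flag drops out of the later rpc_cmd/hostrpc invocations entirely. Illustrated in isolation:

    ckeys=([0]=present [1]=present [2]=present [3]=)   # key3 intentionally empty, as in this test
    for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i: ${ckey[*]:-(no controller key)}"
    done
    # key0: --dhchap-ctrlr-key ckey0
    # key3: (no controller key)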
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.986 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.986 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:39.554 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.813 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.072 00:20:40.072 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.072 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.072 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.331 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.331 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.331 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.331 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.331 { 00:20:40.331 "cntlid": 21, 00:20:40.331 "qid": 0, 00:20:40.331 "state": "enabled", 00:20:40.331 "thread": "nvmf_tgt_poll_group_000", 00:20:40.331 "listen_address": { 00:20:40.331 "trtype": "TCP", 00:20:40.331 "adrfam": "IPv4", 00:20:40.331 "traddr": "10.0.0.2", 00:20:40.331 "trsvcid": "4420" 00:20:40.331 }, 00:20:40.331 "peer_address": { 00:20:40.331 "trtype": "TCP", 00:20:40.331 "adrfam": "IPv4", 00:20:40.331 "traddr": "10.0.0.1", 00:20:40.331 "trsvcid": "37810" 00:20:40.331 }, 00:20:40.331 "auth": { 00:20:40.331 "state": "completed", 00:20:40.331 "digest": "sha256", 00:20:40.331 "dhgroup": "ffdhe3072" 00:20:40.331 } 00:20:40.331 } 00:20:40.331 ]' 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.331 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.590 
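The `--dhchap-secret`/`--dhchap-ctrl-secret` strings passed to `nvme connect` throughout (e.g. `DHHC-1:02:NTA2...c3lZCg==:`) are NVMe DH-HMAC-CHAP keys: the two-digit field after `DHHC-1` names the hash the secret was transformed with (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the final field is the base64-encoded secret with a CRC-32 appended. Keys of this shape can be produced with nvme-cli; treat the flag spelling below as a sketch and check your nvme-cli version:

    # Generate a SHA-256-transformed DH-HMAC-CHAP secret for this log's host NQN
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    # prints something like: DHHC-1:01:<base64 of key and CRC-32>: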
13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.157 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.157 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.414 00:20:41.414 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.414 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.414 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.672 { 00:20:41.672 "cntlid": 23, 00:20:41.672 "qid": 0, 00:20:41.672 "state": "enabled", 00:20:41.672 "thread": "nvmf_tgt_poll_group_000", 00:20:41.672 "listen_address": { 00:20:41.672 "trtype": "TCP", 00:20:41.672 "adrfam": "IPv4", 00:20:41.672 "traddr": "10.0.0.2", 00:20:41.672 "trsvcid": "4420" 00:20:41.672 }, 00:20:41.672 "peer_address": { 00:20:41.672 "trtype": "TCP", 00:20:41.672 "adrfam": "IPv4", 00:20:41.672 "traddr": "10.0.0.1", 00:20:41.672 "trsvcid": "37830" 00:20:41.672 }, 00:20:41.672 "auth": { 00:20:41.672 "state": "completed", 00:20:41.672 "digest": "sha256", 00:20:41.672 "dhgroup": "ffdhe3072" 00:20:41.672 } 00:20:41.672 } 00:20:41.672 ]' 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.672 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.930 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.930 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.930 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.930 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.930 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.930 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:20:42.497 13:48:39 
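Note the asymmetry in the key3 iterations, visible in the `nvme connect` just above: key0-key2 are registered with both `--dhchap-key keyN --dhchap-ctrlr-key ckeyN` and connected with two secrets (bidirectional authentication, where the host also verifies the controller), while key3 carries only `--dhchap-key key3` and a single `--dhchap-secret` (unidirectional: only the host is challenged). The two registrations side by side, using this log's NQNs:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    # bidirectional: the controller must also prove knowledge of ckey1
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # unidirectional: only the host authenticates itself
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3

Here `key1`/`ckey1`/`key3` are key names loaded on the target earlier in the test, before this section of the log.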
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.497 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.756 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.757 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.757 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.757 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.757 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.757 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.015 00:20:43.015 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.015 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.015 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.274 { 00:20:43.274 "cntlid": 25, 00:20:43.274 "qid": 0, 00:20:43.274 "state": "enabled", 00:20:43.274 "thread": "nvmf_tgt_poll_group_000", 00:20:43.274 "listen_address": { 00:20:43.274 "trtype": "TCP", 00:20:43.274 "adrfam": "IPv4", 00:20:43.274 "traddr": "10.0.0.2", 00:20:43.274 "trsvcid": "4420" 00:20:43.274 }, 00:20:43.274 "peer_address": { 00:20:43.274 "trtype": "TCP", 00:20:43.274 "adrfam": "IPv4", 00:20:43.274 "traddr": "10.0.0.1", 00:20:43.274 "trsvcid": "37848" 00:20:43.274 }, 00:20:43.274 "auth": { 00:20:43.274 "state": "completed", 00:20:43.274 "digest": "sha256", 00:20:43.274 "dhgroup": "ffdhe4096" 00:20:43.274 } 00:20:43.274 } 00:20:43.274 ]' 00:20:43.274 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.274 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.533 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:20:44.101 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
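Before each batch of iterations the host-side bdev layer is reconfigured so that only the digest/DH-group pair under test can be negotiated; that is the recurring `bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...` call (at this point the loop has moved on to ffdhe4096). Against the host RPC socket used throughout this log:

    # Restrict DH-HMAC-CHAP negotiation to SHA-256 with ffdhe4096 for subsequent attaches
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

With the allowed set narrowed this way, a successful attach plus the qpair checks above confirm that the pair actually negotiated is the one under test rather than some other mutually supported combination.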
00:20:44.101 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:44.101 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.102 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.102 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.102 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.102 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.102 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.361 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.620 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.620 { 00:20:44.620 "cntlid": 27, 00:20:44.620 "qid": 0, 00:20:44.620 "state": "enabled", 00:20:44.620 "thread": "nvmf_tgt_poll_group_000", 00:20:44.620 "listen_address": { 00:20:44.620 "trtype": "TCP", 00:20:44.620 "adrfam": "IPv4", 00:20:44.620 "traddr": "10.0.0.2", 00:20:44.620 "trsvcid": "4420" 00:20:44.620 }, 00:20:44.620 "peer_address": { 00:20:44.620 "trtype": "TCP", 00:20:44.620 "adrfam": "IPv4", 00:20:44.620 "traddr": "10.0.0.1", 00:20:44.620 "trsvcid": "37888" 00:20:44.620 }, 00:20:44.620 "auth": { 00:20:44.620 "state": "completed", 00:20:44.620 "digest": "sha256", 00:20:44.620 "dhgroup": "ffdhe4096" 00:20:44.620 } 00:20:44.620 } 00:20:44.620 ]' 00:20:44.620 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.921 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.209 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:20:45.468 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.468 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:45.468 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.468 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.468 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.727 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.987 00:20:45.987 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.987 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.987 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.246 { 00:20:46.246 "cntlid": 29, 00:20:46.246 "qid": 0, 00:20:46.246 "state": "enabled", 00:20:46.246 "thread": "nvmf_tgt_poll_group_000", 00:20:46.246 "listen_address": { 00:20:46.246 "trtype": "TCP", 00:20:46.246 "adrfam": "IPv4", 00:20:46.246 "traddr": "10.0.0.2", 00:20:46.246 "trsvcid": "4420" 00:20:46.246 }, 00:20:46.246 "peer_address": { 00:20:46.246 "trtype": "TCP", 00:20:46.246 "adrfam": "IPv4", 00:20:46.246 "traddr": "10.0.0.1", 00:20:46.246 "trsvcid": "37930" 00:20:46.246 }, 00:20:46.246 "auth": { 00:20:46.246 "state": "completed", 00:20:46.246 "digest": "sha256", 00:20:46.246 "dhgroup": "ffdhe4096" 00:20:46.246 } 00:20:46.246 } 00:20:46.246 ]' 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.246 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.505 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.505 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.505 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.505 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.072 13:48:43 
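The `xtrace_disable` / `[[ 0 == 0 ]]` pairs bracketing every `rpc_cmd` come from the shared harness in common/autotest_common.sh: tracing is muted while the RPC runs, then the call's exit status is asserted so a failed RPC aborts the test. A minimal reconstruction of that wrapper — a sketch under assumed variable names; the real helper is more involved:

    rpc_cmd() {
        xtrace_disable                             # autotest_common.sh@561
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?    # target-side RPC on the default socket
        xtrace_restore
        [[ $rc == 0 ]]                             # the "[[ 0 == 0 ]]" seen after each call
    }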
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.072 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.332 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.591 00:20:47.591 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.591 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.591 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.850 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.850 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.851 { 00:20:47.851 "cntlid": 31, 00:20:47.851 "qid": 0, 00:20:47.851 "state": "enabled", 00:20:47.851 "thread": "nvmf_tgt_poll_group_000", 00:20:47.851 "listen_address": { 00:20:47.851 "trtype": "TCP", 00:20:47.851 "adrfam": "IPv4", 00:20:47.851 "traddr": "10.0.0.2", 00:20:47.851 "trsvcid": "4420" 00:20:47.851 }, 00:20:47.851 "peer_address": { 00:20:47.851 "trtype": "TCP", 00:20:47.851 "adrfam": "IPv4", 00:20:47.851 "traddr": "10.0.0.1", 00:20:47.851 "trsvcid": "37956" 00:20:47.851 }, 00:20:47.851 "auth": { 00:20:47.851 "state": "completed", 00:20:47.851 "digest": "sha256", 00:20:47.851 "dhgroup": "ffdhe4096" 00:20:47.851 } 00:20:47.851 } 00:20:47.851 ]' 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.851 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.110 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.679 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.940 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.940 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.940 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.200 00:20:49.200 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.200 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.200 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.200 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.200 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.200 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.200 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.460 { 00:20:49.460 "cntlid": 33, 00:20:49.460 "qid": 0, 00:20:49.460 "state": "enabled", 00:20:49.460 "thread": "nvmf_tgt_poll_group_000", 00:20:49.460 "listen_address": { 
00:20:49.460 "trtype": "TCP", 00:20:49.460 "adrfam": "IPv4", 00:20:49.460 "traddr": "10.0.0.2", 00:20:49.460 "trsvcid": "4420" 00:20:49.460 }, 00:20:49.460 "peer_address": { 00:20:49.460 "trtype": "TCP", 00:20:49.460 "adrfam": "IPv4", 00:20:49.460 "traddr": "10.0.0.1", 00:20:49.460 "trsvcid": "37992" 00:20:49.460 }, 00:20:49.460 "auth": { 00:20:49.460 "state": "completed", 00:20:49.460 "digest": "sha256", 00:20:49.460 "dhgroup": "ffdhe6144" 00:20:49.460 } 00:20:49.460 } 00:20:49.460 ]' 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.460 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.719 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.288 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:50.288 13:48:47 
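Each connect_authenticate round reduces to a target-side and a host-side RPC pair. The key1 invocations traced above, condensed (rpc.py path shortened; socket, address, and NQNs exactly as logged):

  # target side: authorize the host NQN with this round's DH-CHAP key pair
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller over TCP, presenting the same key names
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1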
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.288 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.857 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.857 { 00:20:50.857 "cntlid": 35, 00:20:50.857 "qid": 0, 00:20:50.857 "state": "enabled", 00:20:50.857 "thread": "nvmf_tgt_poll_group_000", 00:20:50.857 "listen_address": { 00:20:50.857 "trtype": "TCP", 00:20:50.857 "adrfam": "IPv4", 00:20:50.857 "traddr": "10.0.0.2", 00:20:50.857 "trsvcid": "4420" 00:20:50.857 }, 00:20:50.857 "peer_address": { 00:20:50.857 "trtype": "TCP", 00:20:50.857 "adrfam": "IPv4", 00:20:50.857 "traddr": "10.0.0.1", 00:20:50.857 "trsvcid": "47930" 00:20:50.857 
}, 00:20:50.857 "auth": { 00:20:50.857 "state": "completed", 00:20:50.857 "digest": "sha256", 00:20:50.857 "dhgroup": "ffdhe6144" 00:20:50.857 } 00:20:50.857 } 00:20:50.857 ]' 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.857 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.117 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.117 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.117 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.117 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.685 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.945 13:48:48 
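After each attach, the round is validated by probing the target's qpair list; a condensed form of the jq checks traced above (expected values are this round's parameters):

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe6144
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed

An auth state of "completed" confirms the DH-HMAC-CHAP handshake actually finished on the live connection, not merely that the connect call returned.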
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.945 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.204 00:20:52.204 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.204 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.204 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.463 { 00:20:52.463 "cntlid": 37, 00:20:52.463 "qid": 0, 00:20:52.463 "state": "enabled", 00:20:52.463 "thread": "nvmf_tgt_poll_group_000", 00:20:52.463 "listen_address": { 00:20:52.463 "trtype": "TCP", 00:20:52.463 "adrfam": "IPv4", 00:20:52.463 "traddr": "10.0.0.2", 00:20:52.463 "trsvcid": "4420" 00:20:52.463 }, 00:20:52.463 "peer_address": { 00:20:52.463 "trtype": "TCP", 00:20:52.463 "adrfam": "IPv4", 00:20:52.463 "traddr": "10.0.0.1", 00:20:52.463 "trsvcid": "47962" 00:20:52.463 }, 00:20:52.463 "auth": { 00:20:52.463 "state": "completed", 00:20:52.463 "digest": "sha256", 00:20:52.463 "dhgroup": "ffdhe6144" 00:20:52.463 } 00:20:52.463 } 00:20:52.463 ]' 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.463 13:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.463 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.722 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.722 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.723 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.723 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:20:53.291 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.291 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:53.291 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.292 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.292 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.292 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.292 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.292 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.551 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.811 00:20:53.811 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.811 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.811 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.073 { 00:20:54.073 "cntlid": 39, 00:20:54.073 "qid": 0, 00:20:54.073 "state": "enabled", 00:20:54.073 "thread": "nvmf_tgt_poll_group_000", 00:20:54.073 "listen_address": { 00:20:54.073 "trtype": "TCP", 00:20:54.073 "adrfam": "IPv4", 00:20:54.073 "traddr": "10.0.0.2", 00:20:54.073 "trsvcid": "4420" 00:20:54.073 }, 00:20:54.073 "peer_address": { 00:20:54.073 "trtype": "TCP", 00:20:54.073 "adrfam": "IPv4", 00:20:54.073 "traddr": "10.0.0.1", 00:20:54.073 "trsvcid": "47996" 00:20:54.073 }, 00:20:54.073 "auth": { 00:20:54.073 "state": "completed", 00:20:54.073 "digest": "sha256", 00:20:54.073 "dhgroup": "ffdhe6144" 00:20:54.073 } 00:20:54.073 } 00:20:54.073 ]' 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.073 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.333 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.333 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.333 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.334 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.901 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.160 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.729 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.729 { 00:20:55.729 "cntlid": 41, 00:20:55.729 "qid": 0, 00:20:55.729 "state": "enabled", 00:20:55.729 "thread": "nvmf_tgt_poll_group_000", 00:20:55.729 "listen_address": { 00:20:55.729 "trtype": "TCP", 00:20:55.729 "adrfam": "IPv4", 00:20:55.729 "traddr": "10.0.0.2", 00:20:55.729 "trsvcid": "4420" 00:20:55.729 }, 00:20:55.729 "peer_address": { 00:20:55.729 "trtype": "TCP", 00:20:55.729 "adrfam": "IPv4", 00:20:55.729 "traddr": "10.0.0.1", 00:20:55.729 "trsvcid": "48018" 00:20:55.729 }, 00:20:55.729 "auth": { 00:20:55.729 "state": "completed", 00:20:55.729 "digest": "sha256", 00:20:55.729 "dhgroup": "ffdhe8192" 00:20:55.729 } 00:20:55.729 } 00:20:55.729 ]' 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.729 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.988 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.988 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.988 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.988 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:55.988 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.988 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.557 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.816 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.385 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.385 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.385 { 00:20:57.385 "cntlid": 43, 00:20:57.385 "qid": 0, 00:20:57.385 "state": "enabled", 00:20:57.385 "thread": "nvmf_tgt_poll_group_000", 00:20:57.385 "listen_address": { 00:20:57.385 "trtype": "TCP", 00:20:57.385 "adrfam": "IPv4", 00:20:57.385 "traddr": "10.0.0.2", 00:20:57.385 "trsvcid": "4420" 00:20:57.385 }, 00:20:57.385 "peer_address": { 00:20:57.385 "trtype": "TCP", 00:20:57.385 "adrfam": "IPv4", 00:20:57.385 "traddr": "10.0.0.1", 00:20:57.385 "trsvcid": "48040" 00:20:57.385 }, 00:20:57.385 "auth": { 00:20:57.385 "state": "completed", 00:20:57.385 "digest": "sha256", 00:20:57.385 "dhgroup": "ffdhe8192" 00:20:57.385 } 00:20:57.385 } 00:20:57.385 ]' 00:20:57.386 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.386 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.386 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.645 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.645 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.645 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.645 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.645 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.645 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
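Each round also exercises the kernel initiator: after detaching the SPDK-side controller, nvme-cli connects with the literal DHHC-1 secrets (the full base64 blobs appear in the trace; elided here with ...):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      --hostid 006f0d1b-21c0-e711-906e-00163566263e \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0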
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.214 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.474 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.078 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.078 { 00:20:59.078 "cntlid": 45, 00:20:59.078 "qid": 0, 00:20:59.078 "state": "enabled", 00:20:59.078 "thread": "nvmf_tgt_poll_group_000", 00:20:59.078 "listen_address": { 00:20:59.078 "trtype": "TCP", 00:20:59.078 "adrfam": "IPv4", 00:20:59.078 "traddr": "10.0.0.2", 00:20:59.078 "trsvcid": "4420" 00:20:59.078 }, 00:20:59.078 "peer_address": { 00:20:59.078 "trtype": "TCP", 00:20:59.078 "adrfam": "IPv4", 00:20:59.078 "traddr": "10.0.0.1", 00:20:59.078 "trsvcid": "48070" 00:20:59.078 }, 00:20:59.078 "auth": { 00:20:59.078 "state": "completed", 00:20:59.078 "digest": "sha256", 00:20:59.078 "dhgroup": "ffdhe8192" 00:20:59.078 } 00:20:59.078 } 00:20:59.078 ]' 00:20:59.078 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.368 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.368 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.368 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.368 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.368 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.368 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.368 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.368 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret 
DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.936 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.196 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.765 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.765 13:48:57 
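Note that key3 rounds, like the one above, carry no --dhchap-ctrlr-key: the @37 expansion emits the flag only when a controller key exists for that index ($3 is connect_authenticate's key-index argument):

  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  # ${var:+word} expands to word only if var is set and non-empty; with
  # ckeys[3] undefined, ckey=() and authentication runs one-way only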
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.765 { 00:21:00.765 "cntlid": 47, 00:21:00.765 "qid": 0, 00:21:00.765 "state": "enabled", 00:21:00.765 "thread": "nvmf_tgt_poll_group_000", 00:21:00.765 "listen_address": { 00:21:00.765 "trtype": "TCP", 00:21:00.765 "adrfam": "IPv4", 00:21:00.765 "traddr": "10.0.0.2", 00:21:00.765 "trsvcid": "4420" 00:21:00.765 }, 00:21:00.765 "peer_address": { 00:21:00.765 "trtype": "TCP", 00:21:00.765 "adrfam": "IPv4", 00:21:00.765 "traddr": "10.0.0.1", 00:21:00.765 "trsvcid": "55812" 00:21:00.765 }, 00:21:00.765 "auth": { 00:21:00.765 "state": "completed", 00:21:00.765 "digest": "sha256", 00:21:00.765 "dhgroup": "ffdhe8192" 00:21:00.765 } 00:21:00.765 } 00:21:00.765 ]' 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.765 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.024 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.024 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.024 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.024 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.024 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.024 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.609 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.870 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.129 00:21:02.129 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.129 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:02.129 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.389 { 00:21:02.389 "cntlid": 49, 00:21:02.389 "qid": 0, 00:21:02.389 "state": "enabled", 00:21:02.389 "thread": "nvmf_tgt_poll_group_000", 00:21:02.389 "listen_address": { 00:21:02.389 "trtype": "TCP", 00:21:02.389 "adrfam": "IPv4", 00:21:02.389 "traddr": "10.0.0.2", 00:21:02.389 "trsvcid": "4420" 00:21:02.389 }, 00:21:02.389 "peer_address": { 00:21:02.389 "trtype": "TCP", 00:21:02.389 "adrfam": "IPv4", 00:21:02.389 "traddr": "10.0.0.1", 00:21:02.389 "trsvcid": "55836" 00:21:02.389 }, 00:21:02.389 "auth": { 00:21:02.389 "state": "completed", 00:21:02.389 "digest": "sha384", 00:21:02.389 "dhgroup": "null" 00:21:02.389 } 00:21:02.389 } 00:21:02.389 ]' 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.389 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.648 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:03.216 13:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.216 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.476 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.476 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.735 { 00:21:03.735 "cntlid": 51, 00:21:03.735 "qid": 0, 00:21:03.735 "state": "enabled", 00:21:03.735 "thread": "nvmf_tgt_poll_group_000", 00:21:03.735 "listen_address": { 00:21:03.735 "trtype": "TCP", 00:21:03.735 "adrfam": "IPv4", 00:21:03.735 "traddr": "10.0.0.2", 00:21:03.735 "trsvcid": "4420" 00:21:03.735 }, 00:21:03.735 "peer_address": { 00:21:03.735 "trtype": "TCP", 00:21:03.735 "adrfam": "IPv4", 00:21:03.735 "traddr": "10.0.0.1", 00:21:03.735 "trsvcid": "55858" 00:21:03.735 }, 00:21:03.735 "auth": { 00:21:03.735 "state": "completed", 00:21:03.735 "digest": "sha384", 00:21:03.735 "dhgroup": "null" 00:21:03.735 } 00:21:03.735 } 00:21:03.735 ]' 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.735 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.994 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:03.994 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.994 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.994 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.994 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.823 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.082 00:21:05.082 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.082 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.082 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.342 { 00:21:05.342 "cntlid": 53, 00:21:05.342 "qid": 0, 00:21:05.342 "state": "enabled", 00:21:05.342 "thread": "nvmf_tgt_poll_group_000", 00:21:05.342 "listen_address": { 00:21:05.342 "trtype": "TCP", 00:21:05.342 "adrfam": "IPv4", 00:21:05.342 "traddr": "10.0.0.2", 00:21:05.342 "trsvcid": "4420" 00:21:05.342 }, 00:21:05.342 "peer_address": { 00:21:05.342 "trtype": "TCP", 00:21:05.342 "adrfam": "IPv4", 00:21:05.342 "traddr": "10.0.0.1", 00:21:05.342 "trsvcid": "55878" 00:21:05.342 }, 00:21:05.342 "auth": { 00:21:05.342 "state": "completed", 00:21:05.342 "digest": "sha384", 00:21:05.342 "dhgroup": "null" 00:21:05.342 } 00:21:05.342 } 00:21:05.342 ]' 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.342 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.601 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.170 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.429 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.688 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.688 { 00:21:06.688 "cntlid": 55, 00:21:06.688 "qid": 0, 00:21:06.688 "state": "enabled", 00:21:06.688 "thread": "nvmf_tgt_poll_group_000", 00:21:06.688 "listen_address": { 00:21:06.688 "trtype": "TCP", 00:21:06.688 "adrfam": "IPv4", 00:21:06.688 "traddr": "10.0.0.2", 00:21:06.688 "trsvcid": "4420" 00:21:06.688 }, 00:21:06.688 "peer_address": { 
00:21:06.688 "trtype": "TCP", 00:21:06.688 "adrfam": "IPv4", 00:21:06.688 "traddr": "10.0.0.1", 00:21:06.688 "trsvcid": "55908" 00:21:06.688 }, 00:21:06.688 "auth": { 00:21:06.688 "state": "completed", 00:21:06.688 "digest": "sha384", 00:21:06.688 "dhgroup": "null" 00:21:06.688 } 00:21:06.688 } 00:21:06.688 ]' 00:21:06.688 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.948 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.207 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.776 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.035 00:21:08.035 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.035 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.035 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.294 { 00:21:08.294 "cntlid": 57, 00:21:08.294 "qid": 0, 00:21:08.294 "state": "enabled", 00:21:08.294 "thread": "nvmf_tgt_poll_group_000", 00:21:08.294 "listen_address": { 00:21:08.294 "trtype": "TCP", 00:21:08.294 "adrfam": "IPv4", 00:21:08.294 "traddr": "10.0.0.2", 00:21:08.294 "trsvcid": "4420" 00:21:08.294 }, 00:21:08.294 "peer_address": { 00:21:08.294 "trtype": "TCP", 00:21:08.294 "adrfam": "IPv4", 00:21:08.294 "traddr": "10.0.0.1", 00:21:08.294 "trsvcid": "55948" 00:21:08.294 }, 00:21:08.294 "auth": { 00:21:08.294 "state": "completed", 00:21:08.294 "digest": "sha384", 00:21:08.294 "dhgroup": "ffdhe2048" 00:21:08.294 } 00:21:08.294 } 00:21:08.294 ]' 
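
The cycle traced above repeats once per DH-HMAC-CHAP key: the target allows the host NQN with nvmf_subsystem_add_host, the SPDK host attaches over TCP, the qpair's negotiated auth parameters are checked with jq, the kernel initiator repeats the handshake via nvme connect, and the host is removed again. As a reading aid, here is a minimal bash sketch of that flow, reconstructed from the xtrace markers (target/auth.sh@31-96) rather than from the actual auth.sh source; the hostnqn/hostid values, the keys/ckeys arrays holding the DHHC-1 secrets, and the socket behind the rpc_cmd helper are assumptions filled in from the trace.

hostrpc() {
    # Host-side RPC, pinned to the bdev_nvme instance on /var/tmp/host.sock (auth.sh@31).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

connect_authenticate() {
    # $1=digest, $2=dhgroup, $3=key id (auth.sh@34-37).
    local digest=$1 dhgroup=$2 key=key$3 qpairs
    # Controller key is optional: only passed when a ckeyN secret exists.
    local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

    # Target side: permit this host NQN to authenticate with keyN (auth.sh@39).
    # rpc_cmd is the test harness's target-side RPC helper (assumed default socket).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "$key" "${ckey[@]}"

    # Host side: attach over TCP with DH-HMAC-CHAP (auth.sh@40), confirm the
    # controller came up, then verify what the qpair actually negotiated.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" "${ckey[@]}"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # Same handshake from the kernel initiator, then clean up (auth.sh@52-56).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "${keys[$3]}" \
        ${ckeys[$3]:+--dhchap-ctrl-secret "${ckeys[$3]}"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
}

# Outer sweep (auth.sh@91-96): every digest x dhgroup x key id combination; the
# host's allowed parameters are narrowed to one pair before each attempt.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

In this stretch of the log the sweep is at digest sha384, walking dhgroups null, ffdhe2048 and ffdhe3072 across key ids 0-3; each qpair dump showing "auth": { "state": "completed", ... } is the assertion that the handshake finished with the expected digest and dhgroup.
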
00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.294 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.553 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.123 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.383 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.643 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.643 { 00:21:09.643 "cntlid": 59, 00:21:09.643 "qid": 0, 00:21:09.643 "state": "enabled", 00:21:09.643 "thread": "nvmf_tgt_poll_group_000", 00:21:09.643 "listen_address": { 00:21:09.643 "trtype": "TCP", 00:21:09.643 "adrfam": "IPv4", 00:21:09.643 "traddr": "10.0.0.2", 00:21:09.643 "trsvcid": "4420" 00:21:09.643 }, 00:21:09.643 "peer_address": { 00:21:09.643 "trtype": "TCP", 00:21:09.643 "adrfam": "IPv4", 00:21:09.643 "traddr": "10.0.0.1", 00:21:09.643 "trsvcid": "44340" 00:21:09.643 }, 00:21:09.643 "auth": { 00:21:09.643 "state": "completed", 00:21:09.643 "digest": "sha384", 00:21:09.643 "dhgroup": "ffdhe2048" 00:21:09.643 } 00:21:09.643 } 00:21:09.643 ]' 00:21:09.643 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.902 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.162 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.730 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.731 
13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.731 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.731 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.731 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.731 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.989 00:21:10.989 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.989 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.989 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.249 { 00:21:11.249 "cntlid": 61, 00:21:11.249 "qid": 0, 00:21:11.249 "state": "enabled", 00:21:11.249 "thread": "nvmf_tgt_poll_group_000", 00:21:11.249 "listen_address": { 00:21:11.249 "trtype": "TCP", 00:21:11.249 "adrfam": "IPv4", 00:21:11.249 "traddr": "10.0.0.2", 00:21:11.249 "trsvcid": "4420" 00:21:11.249 }, 00:21:11.249 "peer_address": { 00:21:11.249 "trtype": "TCP", 00:21:11.249 "adrfam": "IPv4", 00:21:11.249 "traddr": "10.0.0.1", 00:21:11.249 "trsvcid": "44358" 00:21:11.249 }, 00:21:11.249 "auth": { 00:21:11.249 "state": "completed", 00:21:11.249 "digest": "sha384", 00:21:11.249 "dhgroup": "ffdhe2048" 00:21:11.249 } 00:21:11.249 } 00:21:11.249 ]' 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.249 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.249 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.249 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.249 13:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.249 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.249 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.508 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.077 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.336 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.336 
13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.336 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.336 00:21:12.336 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.336 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.336 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.595 { 00:21:12.595 "cntlid": 63, 00:21:12.595 "qid": 0, 00:21:12.595 "state": "enabled", 00:21:12.595 "thread": "nvmf_tgt_poll_group_000", 00:21:12.595 "listen_address": { 00:21:12.595 "trtype": "TCP", 00:21:12.595 "adrfam": "IPv4", 00:21:12.595 "traddr": "10.0.0.2", 00:21:12.595 "trsvcid": "4420" 00:21:12.595 }, 00:21:12.595 "peer_address": { 00:21:12.595 "trtype": "TCP", 00:21:12.595 "adrfam": "IPv4", 00:21:12.595 "traddr": "10.0.0.1", 00:21:12.595 "trsvcid": "44380" 00:21:12.595 }, 00:21:12.595 "auth": { 00:21:12.595 "state": "completed", 00:21:12.595 "digest": "sha384", 00:21:12.595 "dhgroup": "ffdhe2048" 00:21:12.595 } 00:21:12.595 } 00:21:12.595 ]' 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.595 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.855 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.855 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.855 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:12.855 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:13.424 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.749 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.749 13:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.009 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.009 { 00:21:14.009 "cntlid": 65, 00:21:14.009 "qid": 0, 00:21:14.009 "state": "enabled", 00:21:14.009 "thread": "nvmf_tgt_poll_group_000", 00:21:14.009 "listen_address": { 00:21:14.009 "trtype": "TCP", 00:21:14.009 "adrfam": "IPv4", 00:21:14.009 "traddr": "10.0.0.2", 00:21:14.009 "trsvcid": "4420" 00:21:14.009 }, 00:21:14.009 "peer_address": { 00:21:14.009 "trtype": "TCP", 00:21:14.009 "adrfam": "IPv4", 00:21:14.009 "traddr": "10.0.0.1", 00:21:14.009 "trsvcid": "44388" 00:21:14.009 }, 00:21:14.009 "auth": { 00:21:14.009 "state": "completed", 00:21:14.009 "digest": "sha384", 00:21:14.009 "dhgroup": "ffdhe3072" 00:21:14.009 } 00:21:14.009 } 00:21:14.009 ]' 00:21:14.009 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.268 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.528 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 
006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.097 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.357 00:21:15.357 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.357 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.357 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.616 { 00:21:15.616 "cntlid": 67, 00:21:15.616 "qid": 0, 00:21:15.616 "state": "enabled", 00:21:15.616 "thread": "nvmf_tgt_poll_group_000", 00:21:15.616 "listen_address": { 00:21:15.616 "trtype": "TCP", 00:21:15.616 "adrfam": "IPv4", 00:21:15.616 "traddr": "10.0.0.2", 00:21:15.616 "trsvcid": "4420" 00:21:15.616 }, 00:21:15.616 "peer_address": { 00:21:15.616 "trtype": "TCP", 00:21:15.616 "adrfam": "IPv4", 00:21:15.616 "traddr": "10.0.0.1", 00:21:15.616 "trsvcid": "44416" 00:21:15.616 }, 00:21:15.616 "auth": { 00:21:15.616 "state": "completed", 00:21:15.616 "digest": "sha384", 00:21:15.616 "dhgroup": "ffdhe3072" 00:21:15.616 } 00:21:15.616 } 00:21:15.616 ]' 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.616 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.875 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:16.444 13:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.444 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.702 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:16.702 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.702 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.703 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.962 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.962 { 00:21:16.962 "cntlid": 69, 00:21:16.962 "qid": 0, 00:21:16.962 "state": "enabled", 00:21:16.962 "thread": "nvmf_tgt_poll_group_000", 00:21:16.962 "listen_address": { 00:21:16.962 "trtype": "TCP", 00:21:16.962 "adrfam": "IPv4", 00:21:16.962 "traddr": "10.0.0.2", 00:21:16.962 "trsvcid": "4420" 00:21:16.962 }, 00:21:16.962 "peer_address": { 00:21:16.962 "trtype": "TCP", 00:21:16.962 "adrfam": "IPv4", 00:21:16.962 "traddr": "10.0.0.1", 00:21:16.962 "trsvcid": "44444" 00:21:16.962 }, 00:21:16.962 "auth": { 00:21:16.962 "state": "completed", 00:21:16.962 "digest": "sha384", 00:21:16.962 "dhgroup": "ffdhe3072" 00:21:16.962 } 00:21:16.962 } 00:21:16.962 ]' 00:21:16.962 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.220 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.479 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.048 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.307 00:21:18.307 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.307 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.307 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.565 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.565 13:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.565 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.566 { 00:21:18.566 "cntlid": 71, 00:21:18.566 "qid": 0, 00:21:18.566 "state": "enabled", 00:21:18.566 "thread": "nvmf_tgt_poll_group_000", 00:21:18.566 "listen_address": { 00:21:18.566 "trtype": "TCP", 00:21:18.566 "adrfam": "IPv4", 00:21:18.566 "traddr": "10.0.0.2", 00:21:18.566 "trsvcid": "4420" 00:21:18.566 }, 00:21:18.566 "peer_address": { 00:21:18.566 "trtype": "TCP", 00:21:18.566 "adrfam": "IPv4", 00:21:18.566 "traddr": "10.0.0.1", 00:21:18.566 "trsvcid": "44472" 00:21:18.566 }, 00:21:18.566 "auth": { 00:21:18.566 "state": "completed", 00:21:18.566 "digest": "sha384", 00:21:18.566 "dhgroup": "ffdhe3072" 00:21:18.566 } 00:21:18.566 } 00:21:18.566 ]' 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.566 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.824 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.393 13:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.393 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.652 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.911 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.911 13:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.911 { 00:21:19.911 "cntlid": 73, 00:21:19.911 "qid": 0, 00:21:19.911 "state": "enabled", 00:21:19.911 "thread": "nvmf_tgt_poll_group_000", 00:21:19.911 "listen_address": { 00:21:19.911 "trtype": "TCP", 00:21:19.911 "adrfam": "IPv4", 00:21:19.911 "traddr": "10.0.0.2", 00:21:19.911 "trsvcid": "4420" 00:21:19.911 }, 00:21:19.911 "peer_address": { 00:21:19.911 "trtype": "TCP", 00:21:19.911 "adrfam": "IPv4", 00:21:19.911 "traddr": "10.0.0.1", 00:21:19.911 "trsvcid": "36342" 00:21:19.911 }, 00:21:19.911 "auth": { 00:21:19.911 "state": "completed", 00:21:19.911 "digest": "sha384", 00:21:19.911 "dhgroup": "ffdhe4096" 00:21:19.911 } 00:21:19.911 } 00:21:19.911 ]' 00:21:19.911 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.171 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.430 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.998 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.258 00:21:21.258 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.258 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.258 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:21:21.517 { 00:21:21.517 "cntlid": 75, 00:21:21.517 "qid": 0, 00:21:21.517 "state": "enabled", 00:21:21.517 "thread": "nvmf_tgt_poll_group_000", 00:21:21.517 "listen_address": { 00:21:21.517 "trtype": "TCP", 00:21:21.517 "adrfam": "IPv4", 00:21:21.517 "traddr": "10.0.0.2", 00:21:21.517 "trsvcid": "4420" 00:21:21.517 }, 00:21:21.517 "peer_address": { 00:21:21.517 "trtype": "TCP", 00:21:21.517 "adrfam": "IPv4", 00:21:21.517 "traddr": "10.0.0.1", 00:21:21.517 "trsvcid": "36354" 00:21:21.517 }, 00:21:21.517 "auth": { 00:21:21.517 "state": "completed", 00:21:21.517 "digest": "sha384", 00:21:21.517 "dhgroup": "ffdhe4096" 00:21:21.517 } 00:21:21.517 } 00:21:21.517 ]' 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.517 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.776 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.776 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.776 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.776 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.344 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.603 
13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:22.603 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.603 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.603 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:22.603 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.603 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.604 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.604 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.604 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.604 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.604 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.604 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.862 00:21:22.863 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.863 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.863 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.122 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.122 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.122 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.122 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.122 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.122 { 00:21:23.122 "cntlid": 77, 00:21:23.122 "qid": 0, 00:21:23.122 "state": "enabled", 00:21:23.122 "thread": "nvmf_tgt_poll_group_000", 00:21:23.122 "listen_address": { 00:21:23.122 "trtype": "TCP", 00:21:23.122 "adrfam": "IPv4", 00:21:23.122 "traddr": "10.0.0.2", 00:21:23.122 "trsvcid": "4420" 00:21:23.122 }, 00:21:23.122 "peer_address": { 
00:21:23.122 "trtype": "TCP", 00:21:23.122 "adrfam": "IPv4", 00:21:23.122 "traddr": "10.0.0.1", 00:21:23.122 "trsvcid": "36376" 00:21:23.122 }, 00:21:23.122 "auth": { 00:21:23.122 "state": "completed", 00:21:23.123 "digest": "sha384", 00:21:23.123 "dhgroup": "ffdhe4096" 00:21:23.123 } 00:21:23.123 } 00:21:23.123 ]' 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.123 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.383 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.951 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.209 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.468 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.468 { 00:21:24.468 "cntlid": 79, 00:21:24.468 "qid": 0, 00:21:24.468 "state": "enabled", 00:21:24.468 "thread": "nvmf_tgt_poll_group_000", 00:21:24.468 "listen_address": { 00:21:24.468 "trtype": "TCP", 00:21:24.468 "adrfam": "IPv4", 00:21:24.468 "traddr": "10.0.0.2", 00:21:24.468 "trsvcid": "4420" 00:21:24.468 }, 00:21:24.468 "peer_address": { 00:21:24.468 "trtype": "TCP", 00:21:24.468 "adrfam": "IPv4", 00:21:24.468 "traddr": "10.0.0.1", 00:21:24.468 "trsvcid": "36406" 00:21:24.468 }, 00:21:24.468 "auth": { 00:21:24.468 "state": "completed", 00:21:24.468 "digest": "sha384", 00:21:24.468 "dhgroup": "ffdhe4096" 00:21:24.468 } 00:21:24.468 } 00:21:24.468 ]' 00:21:24.468 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.727 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.987 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
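The ckey assignment that closes the trace above, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), relies on bash's ${var:+word} alternate-value expansion: the --dhchap-ctrlr-key flag pair materializes only when a controller secret exists for that key id ($3 being connect_authenticate's key-id argument). That is why the key0-key2 cycles in this log pass --dhchap-ctrlr-key and --dhchap-ctrl-secret while the key3 cycles authenticate unidirectionally without them. A minimal standalone illustration, using invented stand-in values rather than the log's secrets:

# key3 deliberately has no controller secret, mirroring this run's key set.
declare -a ckeys=("ctrl0" "ctrl1" "ctrl2" "")

for id in "${!ckeys[@]}"; do
    # ${ckeys[$id]:+...} expands to the two flag words only when non-empty,
    # so ckey becomes either a 2-element array or an empty one.
    ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
    echo "key$id -> ${ckey[*]:-unidirectional (no ctrlr key)}"
done
# Prints the flag pair for key0..key2 and "unidirectional" for key3.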
00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.556 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.125 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.125 { 00:21:26.125 "cntlid": 81, 00:21:26.125 "qid": 0, 00:21:26.125 "state": "enabled", 00:21:26.125 "thread": "nvmf_tgt_poll_group_000", 00:21:26.125 "listen_address": { 00:21:26.125 "trtype": "TCP", 00:21:26.125 "adrfam": "IPv4", 00:21:26.125 "traddr": "10.0.0.2", 00:21:26.125 "trsvcid": "4420" 00:21:26.125 }, 00:21:26.125 "peer_address": { 00:21:26.125 "trtype": "TCP", 00:21:26.125 "adrfam": "IPv4", 00:21:26.125 "traddr": "10.0.0.1", 00:21:26.125 "trsvcid": "36420" 00:21:26.125 }, 00:21:26.125 "auth": { 00:21:26.125 "state": "completed", 00:21:26.125 "digest": "sha384", 00:21:26.125 "dhgroup": "ffdhe6144" 00:21:26.125 } 00:21:26.125 } 00:21:26.125 ]' 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.125 13:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.125 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.385 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.385 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.385 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.385 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.955 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.214 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.215 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.215 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.475 00:21:27.475 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.475 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.475 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.736 { 00:21:27.736 "cntlid": 83, 00:21:27.736 "qid": 0, 00:21:27.736 "state": "enabled", 00:21:27.736 "thread": "nvmf_tgt_poll_group_000", 00:21:27.736 "listen_address": { 00:21:27.736 "trtype": "TCP", 00:21:27.736 "adrfam": "IPv4", 00:21:27.736 "traddr": "10.0.0.2", 00:21:27.736 "trsvcid": "4420" 00:21:27.736 }, 00:21:27.736 "peer_address": { 00:21:27.736 "trtype": "TCP", 00:21:27.736 "adrfam": "IPv4", 00:21:27.736 "traddr": "10.0.0.1", 00:21:27.736 "trsvcid": "36460" 00:21:27.736 }, 00:21:27.736 "auth": { 00:21:27.736 "state": "completed", 00:21:27.736 "digest": "sha384", 00:21:27.736 "dhgroup": "ffdhe6144" 00:21:27.736 } 00:21:27.736 } 00:21:27.736 ]' 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.736 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.003 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.571 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.830 13:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.830 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.087 00:21:29.087 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.087 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.087 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.345 { 00:21:29.345 "cntlid": 85, 00:21:29.345 "qid": 0, 00:21:29.345 "state": "enabled", 00:21:29.345 "thread": "nvmf_tgt_poll_group_000", 00:21:29.345 "listen_address": { 00:21:29.345 "trtype": "TCP", 00:21:29.345 "adrfam": "IPv4", 00:21:29.345 "traddr": "10.0.0.2", 00:21:29.345 "trsvcid": "4420" 00:21:29.345 }, 00:21:29.345 "peer_address": { 00:21:29.345 "trtype": "TCP", 00:21:29.345 "adrfam": "IPv4", 00:21:29.345 "traddr": "10.0.0.1", 00:21:29.345 "trsvcid": "36492" 00:21:29.345 }, 00:21:29.345 "auth": { 00:21:29.345 "state": "completed", 00:21:29.345 "digest": "sha384", 00:21:29.345 "dhgroup": "ffdhe6144" 00:21:29.345 } 00:21:29.345 } 00:21:29.345 ]' 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.345 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.603 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.169 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.169 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.169 13:49:27 
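
Taken in isolation, the kernel-initiator half of each round above reduces to one authenticated connect followed by a disconnect. A minimal sketch of that cycle follows; HOSTNQN, HOSTID, HOST_KEY, and CTRL_KEY are placeholder variables standing in for the host NQN/ID and the DHHC-1 secrets the script pulls from its keys[]/ckeys[] arrays (the literal secrets in this log are throwaway test values):

    # --dhchap-secret authenticates the host to the controller;
    # --dhchap-ctrl-secret additionally makes the host verify the controller
    # (bidirectional DH-HMAC-CHAP), pairing with the target's --dhchap-ctrlr-key.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
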
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.737 00:21:30.737 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.737 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.737 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.737 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.737 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.737 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.738 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.738 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.738 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.738 { 00:21:30.738 "cntlid": 87, 00:21:30.738 "qid": 0, 00:21:30.738 "state": "enabled", 00:21:30.738 "thread": "nvmf_tgt_poll_group_000", 00:21:30.738 "listen_address": { 00:21:30.738 "trtype": "TCP", 00:21:30.738 "adrfam": "IPv4", 00:21:30.738 "traddr": "10.0.0.2", 00:21:30.738 "trsvcid": "4420" 00:21:30.738 }, 00:21:30.738 "peer_address": { 00:21:30.738 "trtype": "TCP", 00:21:30.738 "adrfam": "IPv4", 00:21:30.738 "traddr": "10.0.0.1", 00:21:30.738 "trsvcid": "41936" 00:21:30.738 }, 00:21:30.738 "auth": { 00:21:30.738 "state": "completed", 00:21:30.738 "digest": "sha384", 00:21:30.738 "dhgroup": "ffdhe6144" 00:21:30.738 } 00:21:30.738 } 00:21:30.738 ]' 00:21:30.738 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.738 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.738 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.997 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.997 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.997 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.997 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.997 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.255 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:31.515 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.774 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.342 00:21:32.342 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.342 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.342 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.601 { 00:21:32.601 "cntlid": 89, 00:21:32.601 "qid": 0, 00:21:32.601 "state": "enabled", 00:21:32.601 "thread": "nvmf_tgt_poll_group_000", 00:21:32.601 "listen_address": { 00:21:32.601 "trtype": "TCP", 00:21:32.601 "adrfam": "IPv4", 00:21:32.601 "traddr": "10.0.0.2", 00:21:32.601 "trsvcid": "4420" 00:21:32.601 }, 00:21:32.601 "peer_address": { 00:21:32.601 "trtype": "TCP", 00:21:32.601 "adrfam": "IPv4", 00:21:32.601 "traddr": "10.0.0.1", 00:21:32.601 "trsvcid": "41966" 00:21:32.601 }, 00:21:32.601 "auth": { 00:21:32.601 "state": "completed", 00:21:32.601 "digest": "sha384", 00:21:32.601 "dhgroup": "ffdhe8192" 00:21:32.601 } 00:21:32.601 } 00:21:32.601 ]' 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.601 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.860 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:33.428 13:49:30 
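
Each digest/dhgroup/key combination is wired up with the same three RPC calls before any connect is attempted. Condensed from the records above, assuming the rpc.py and socket paths used throughout this job and key names (key0/ckey0) that auth.sh registered earlier in the run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # host (initiator) side: pin the permitted digest and DH group
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # target side: admit the host NQN with a host key and a controller key
    "$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, which forces the DH-HMAC-CHAP handshake
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
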
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.428 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.429 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.429 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.997 00:21:33.997 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.997 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.997 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.255 { 00:21:34.255 "cntlid": 91, 00:21:34.255 "qid": 0, 00:21:34.255 "state": "enabled", 00:21:34.255 "thread": "nvmf_tgt_poll_group_000", 00:21:34.255 "listen_address": { 00:21:34.255 "trtype": "TCP", 00:21:34.255 "adrfam": "IPv4", 00:21:34.255 "traddr": "10.0.0.2", 00:21:34.255 "trsvcid": "4420" 00:21:34.255 }, 00:21:34.255 "peer_address": { 00:21:34.255 "trtype": "TCP", 00:21:34.255 "adrfam": "IPv4", 00:21:34.255 "traddr": "10.0.0.1", 00:21:34.255 "trsvcid": "41986" 00:21:34.255 }, 00:21:34.255 "auth": { 00:21:34.255 "state": "completed", 00:21:34.255 "digest": "sha384", 00:21:34.255 "dhgroup": "ffdhe8192" 00:21:34.255 } 00:21:34.255 } 00:21:34.255 ]' 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.255 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.255 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.255 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.255 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.255 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.255 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.513 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.081 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.340 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.340 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.340 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.598 00:21:35.598 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.598 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.598 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.857 { 00:21:35.857 "cntlid": 93, 00:21:35.857 "qid": 0, 00:21:35.857 "state": "enabled", 00:21:35.857 "thread": "nvmf_tgt_poll_group_000", 00:21:35.857 "listen_address": { 00:21:35.857 "trtype": "TCP", 00:21:35.857 "adrfam": "IPv4", 00:21:35.857 "traddr": "10.0.0.2", 00:21:35.857 "trsvcid": "4420" 00:21:35.857 }, 00:21:35.857 "peer_address": { 00:21:35.857 "trtype": "TCP", 00:21:35.857 "adrfam": "IPv4", 00:21:35.857 "traddr": "10.0.0.1", 00:21:35.857 "trsvcid": "42016" 00:21:35.857 }, 00:21:35.857 "auth": { 00:21:35.857 "state": "completed", 00:21:35.857 "digest": "sha384", 00:21:35.857 "dhgroup": "ffdhe8192" 00:21:35.857 } 00:21:35.857 } 00:21:35.857 ]' 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.857 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.116 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.683 13:49:33 
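
The pass/fail criterion in every round is the trio of jq assertions seen above (auth.sh lines 46-48): the digest, DH group, and authentication state reported on the first qpair must match what was configured. A standalone equivalent, reusing the same RPCs as the log, with the subsystem NQN and expected values from this round:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # each test fails unless the negotiated parameters match the configuration
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
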
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.683 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.941 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.942 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.942 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.942 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.509 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.509 { 00:21:37.509 "cntlid": 95, 00:21:37.509 "qid": 0, 00:21:37.509 "state": "enabled", 00:21:37.509 "thread": "nvmf_tgt_poll_group_000", 00:21:37.509 "listen_address": { 00:21:37.509 "trtype": "TCP", 00:21:37.509 "adrfam": "IPv4", 00:21:37.509 "traddr": "10.0.0.2", 00:21:37.509 "trsvcid": "4420" 00:21:37.509 }, 00:21:37.509 "peer_address": { 00:21:37.509 "trtype": "TCP", 00:21:37.509 "adrfam": "IPv4", 00:21:37.509 "traddr": "10.0.0.1", 00:21:37.509 "trsvcid": "42032" 00:21:37.509 }, 00:21:37.509 "auth": { 00:21:37.509 "state": "completed", 00:21:37.509 "digest": "sha384", 00:21:37.509 "dhgroup": "ffdhe8192" 00:21:37.509 } 00:21:37.509 } 00:21:37.509 ]' 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.509 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.768 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.768 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.768 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.768 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.768 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.768 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.337 13:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.337 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.597 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.856 00:21:38.856 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.856 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.856 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.115 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.115 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.116 13:49:35 
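
The transition just above, from sha384/ffdhe8192 to sha512 with the null DH group, is the outer loops of auth.sh advancing (script lines 91-93 in the trace). The driving structure is approximately the following, where hostrpc and connect_authenticate are the script's own helpers visible in the trace; the array contents are inferred from the combinations this run exercises, not quoted from the script:

    digests=(sha256 sha384 sha512)    # inferred ordering; sha512 is the last digest here
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do   # keys 0..3, registered earlier in the run
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
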
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.116 { 00:21:39.116 "cntlid": 97, 00:21:39.116 "qid": 0, 00:21:39.116 "state": "enabled", 00:21:39.116 "thread": "nvmf_tgt_poll_group_000", 00:21:39.116 "listen_address": { 00:21:39.116 "trtype": "TCP", 00:21:39.116 "adrfam": "IPv4", 00:21:39.116 "traddr": "10.0.0.2", 00:21:39.116 "trsvcid": "4420" 00:21:39.116 }, 00:21:39.116 "peer_address": { 00:21:39.116 "trtype": "TCP", 00:21:39.116 "adrfam": "IPv4", 00:21:39.116 "traddr": "10.0.0.1", 00:21:39.116 "trsvcid": "42054" 00:21:39.116 }, 00:21:39.116 "auth": { 00:21:39.116 "state": "completed", 00:21:39.116 "digest": "sha512", 00:21:39.116 "dhgroup": "null" 00:21:39.116 } 00:21:39.116 } 00:21:39.116 ]' 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.116 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.375 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.945 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.204 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.204 00:21:40.204 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.204 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.204 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.464 { 00:21:40.464 "cntlid": 99, 00:21:40.464 "qid": 0, 00:21:40.464 "state": "enabled", 00:21:40.464 "thread": "nvmf_tgt_poll_group_000", 00:21:40.464 "listen_address": { 00:21:40.464 "trtype": "TCP", 00:21:40.464 "adrfam": "IPv4", 00:21:40.464 
"traddr": "10.0.0.2", 00:21:40.464 "trsvcid": "4420" 00:21:40.464 }, 00:21:40.464 "peer_address": { 00:21:40.464 "trtype": "TCP", 00:21:40.464 "adrfam": "IPv4", 00:21:40.464 "traddr": "10.0.0.1", 00:21:40.464 "trsvcid": "56204" 00:21:40.464 }, 00:21:40.464 "auth": { 00:21:40.464 "state": "completed", 00:21:40.464 "digest": "sha512", 00:21:40.464 "dhgroup": "null" 00:21:40.464 } 00:21:40.464 } 00:21:40.464 ]' 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.464 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:40.723 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.723 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.723 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.723 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.723 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.290 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.549 13:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.549 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.808 00:21:41.808 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.808 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.808 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.068 { 00:21:42.068 "cntlid": 101, 00:21:42.068 "qid": 0, 00:21:42.068 "state": "enabled", 00:21:42.068 "thread": "nvmf_tgt_poll_group_000", 00:21:42.068 "listen_address": { 00:21:42.068 "trtype": "TCP", 00:21:42.068 "adrfam": "IPv4", 00:21:42.068 "traddr": "10.0.0.2", 00:21:42.068 "trsvcid": "4420" 00:21:42.068 }, 00:21:42.068 "peer_address": { 00:21:42.068 "trtype": "TCP", 00:21:42.068 "adrfam": "IPv4", 00:21:42.068 "traddr": "10.0.0.1", 00:21:42.068 "trsvcid": "56230" 00:21:42.068 }, 00:21:42.068 "auth": { 00:21:42.068 "state": "completed", 00:21:42.068 "digest": "sha512", 00:21:42.068 "dhgroup": "null" 
00:21:42.068 } 00:21:42.068 } 00:21:42.068 ]' 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.068 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.391 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.959 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.218 00:21:43.218 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.218 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.218 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.478 { 00:21:43.478 "cntlid": 103, 00:21:43.478 "qid": 0, 00:21:43.478 "state": "enabled", 00:21:43.478 "thread": "nvmf_tgt_poll_group_000", 00:21:43.478 "listen_address": { 00:21:43.478 "trtype": "TCP", 00:21:43.478 "adrfam": "IPv4", 00:21:43.478 "traddr": "10.0.0.2", 00:21:43.478 "trsvcid": "4420" 00:21:43.478 }, 00:21:43.478 "peer_address": { 00:21:43.478 "trtype": "TCP", 00:21:43.478 "adrfam": "IPv4", 00:21:43.478 "traddr": "10.0.0.1", 00:21:43.478 "trsvcid": "56262" 00:21:43.478 }, 00:21:43.478 "auth": { 00:21:43.478 "state": "completed", 00:21:43.478 "digest": "sha512", 00:21:43.478 "dhgroup": "null" 00:21:43.478 } 00:21:43.478 } 00:21:43.478 ]' 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.478 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.478 13:49:40 
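
After each qpair listing like the one above, the teardown that follows is symmetric with setup: the SPDK-side controller is detached, the same key pair is proven once more through the kernel initiator (the connect/disconnect pass sketched earlier), and the host NQN is removed from the subsystem so the next key can be installed cleanly. Condensed, with RPC and HOSTNQN as in the earlier sketches:

    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # SPDK path down
    # ... kernel nvme connect / nvme disconnect pass (see earlier sketch) ...
    "$RPC" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
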
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:43.479 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.479 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.479 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.479 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.738 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.307 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.567 13:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.567 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.827 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.827 { 00:21:44.827 "cntlid": 105, 00:21:44.827 "qid": 0, 00:21:44.827 "state": "enabled", 00:21:44.827 "thread": "nvmf_tgt_poll_group_000", 00:21:44.827 "listen_address": { 00:21:44.827 "trtype": "TCP", 00:21:44.827 "adrfam": "IPv4", 00:21:44.827 "traddr": "10.0.0.2", 00:21:44.827 "trsvcid": "4420" 00:21:44.827 }, 00:21:44.827 "peer_address": { 00:21:44.827 "trtype": "TCP", 00:21:44.827 "adrfam": "IPv4", 00:21:44.827 "traddr": "10.0.0.1", 00:21:44.827 "trsvcid": "56284" 00:21:44.827 }, 00:21:44.827 "auth": { 00:21:44.827 "state": "completed", 00:21:44.827 "digest": "sha512", 00:21:44.827 "dhgroup": "ffdhe2048" 00:21:44.827 } 00:21:44.827 } 00:21:44.827 ]' 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.827 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.087 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.087 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.087 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.087 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.087 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.087 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.655 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.914 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.915 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.915 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.915 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:21:45.915 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.915 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.174 00:21:46.174 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.174 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.174 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.433 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.433 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.433 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.433 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.433 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.433 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.433 { 00:21:46.433 "cntlid": 107, 00:21:46.433 "qid": 0, 00:21:46.433 "state": "enabled", 00:21:46.433 "thread": "nvmf_tgt_poll_group_000", 00:21:46.433 "listen_address": { 00:21:46.433 "trtype": "TCP", 00:21:46.433 "adrfam": "IPv4", 00:21:46.434 "traddr": "10.0.0.2", 00:21:46.434 "trsvcid": "4420" 00:21:46.434 }, 00:21:46.434 "peer_address": { 00:21:46.434 "trtype": "TCP", 00:21:46.434 "adrfam": "IPv4", 00:21:46.434 "traddr": "10.0.0.1", 00:21:46.434 "trsvcid": "56306" 00:21:46.434 }, 00:21:46.434 "auth": { 00:21:46.434 "state": "completed", 00:21:46.434 "digest": "sha512", 00:21:46.434 "dhgroup": "ffdhe2048" 00:21:46.434 } 00:21:46.434 } 00:21:46.434 ]' 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.434 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.694 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.261 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
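(The loop traced above is easier to follow out of xtrace form. connect_authenticate pins one digest/dhgroup/key-id combination, registers the host on the subsystem, attaches a controller so the DH-HMAC-CHAP handshake actually runs, then reads the negotiated parameters back off the qpair; the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace adds the controller-key flag only when a controller key exists for that key id ($3 is connect_authenticate's key-id argument), so unidirectional and bidirectional auth share one path. A minimal shell sketch of one iteration, assuming the target from this run at 10.0.0.2:4420, the host RPC socket /var/tmp/host.sock, and DHHC-1 secrets registered earlier in the run under the names key0..key3 / ckey0..ckey3:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2024-03.io.spdk:cnode0
host_nqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# host side: restrict the initiator to one digest/dhgroup so the handshake is deterministic
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# target side (default RPC socket): allow the host on the subsystem, keyed for mutual auth
"$rpc" nvmf_subsystem_add_host "$subsys" "$host_nqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# attach a controller; this is where the DH-HMAC-CHAP exchange runs
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$host_nqn" -n "$subsys" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# confirm the qpair negotiated what was requested (expect: sha512, ffdhe2048, completed)
"$rpc" nvmf_subsystem_get_qpairs "$subsys" | jq -r '.[0].auth | .digest, .dhgroup, .state'
# tear down before the next combination
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
)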
00:21:47.261 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.520 00:21:47.520 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.520 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.520 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.779 { 00:21:47.779 "cntlid": 109, 00:21:47.779 "qid": 0, 00:21:47.779 "state": "enabled", 00:21:47.779 "thread": "nvmf_tgt_poll_group_000", 00:21:47.779 "listen_address": { 00:21:47.779 "trtype": "TCP", 00:21:47.779 "adrfam": "IPv4", 00:21:47.779 "traddr": "10.0.0.2", 00:21:47.779 "trsvcid": "4420" 00:21:47.779 }, 00:21:47.779 "peer_address": { 00:21:47.779 "trtype": "TCP", 00:21:47.779 "adrfam": "IPv4", 00:21:47.779 "traddr": "10.0.0.1", 00:21:47.779 "trsvcid": "56334" 00:21:47.779 }, 00:21:47.779 "auth": { 00:21:47.779 "state": "completed", 00:21:47.779 "digest": "sha512", 00:21:47.779 "dhgroup": "ffdhe2048" 00:21:47.779 } 00:21:47.779 } 00:21:47.779 ]' 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.779 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.038 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.039 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.039 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.039 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.607 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.866 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.867 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.867 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.125 00:21:49.125 13:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.125 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.125 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.385 { 00:21:49.385 "cntlid": 111, 00:21:49.385 "qid": 0, 00:21:49.385 "state": "enabled", 00:21:49.385 "thread": "nvmf_tgt_poll_group_000", 00:21:49.385 "listen_address": { 00:21:49.385 "trtype": "TCP", 00:21:49.385 "adrfam": "IPv4", 00:21:49.385 "traddr": "10.0.0.2", 00:21:49.385 "trsvcid": "4420" 00:21:49.385 }, 00:21:49.385 "peer_address": { 00:21:49.385 "trtype": "TCP", 00:21:49.385 "adrfam": "IPv4", 00:21:49.385 "traddr": "10.0.0.1", 00:21:49.385 "trsvcid": "56372" 00:21:49.385 }, 00:21:49.385 "auth": { 00:21:49.385 "state": "completed", 00:21:49.385 "digest": "sha512", 00:21:49.385 "dhgroup": "ffdhe2048" 00:21:49.385 } 00:21:49.385 } 00:21:49.385 ]' 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.385 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.644 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.213 13:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.213 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.213 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.472 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.472 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.472 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.472 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.731 13:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.731 { 00:21:50.731 "cntlid": 113, 00:21:50.731 "qid": 0, 00:21:50.731 "state": "enabled", 00:21:50.731 "thread": "nvmf_tgt_poll_group_000", 00:21:50.731 "listen_address": { 00:21:50.731 "trtype": "TCP", 00:21:50.731 "adrfam": "IPv4", 00:21:50.731 "traddr": "10.0.0.2", 00:21:50.731 "trsvcid": "4420" 00:21:50.731 }, 00:21:50.731 "peer_address": { 00:21:50.731 "trtype": "TCP", 00:21:50.731 "adrfam": "IPv4", 00:21:50.731 "traddr": "10.0.0.1", 00:21:50.731 "trsvcid": "42358" 00:21:50.731 }, 00:21:50.731 "auth": { 00:21:50.731 "state": "completed", 00:21:50.731 "digest": "sha512", 00:21:50.731 "dhgroup": "ffdhe3072" 00:21:50.731 } 00:21:50.731 } 00:21:50.731 ]' 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.731 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.990 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.990 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.990 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.990 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.990 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.990 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.559 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.819 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.078 00:21:52.078 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.078 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.078 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.337 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:21:52.337 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.337 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.337 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.338 { 00:21:52.338 "cntlid": 115, 00:21:52.338 "qid": 0, 00:21:52.338 "state": "enabled", 00:21:52.338 "thread": "nvmf_tgt_poll_group_000", 00:21:52.338 "listen_address": { 00:21:52.338 "trtype": "TCP", 00:21:52.338 "adrfam": "IPv4", 00:21:52.338 "traddr": "10.0.0.2", 00:21:52.338 "trsvcid": "4420" 00:21:52.338 }, 00:21:52.338 "peer_address": { 00:21:52.338 "trtype": "TCP", 00:21:52.338 "adrfam": "IPv4", 00:21:52.338 "traddr": "10.0.0.1", 00:21:52.338 "trsvcid": "42386" 00:21:52.338 }, 00:21:52.338 "auth": { 00:21:52.338 "state": "completed", 00:21:52.338 "digest": "sha512", 00:21:52.338 "dhgroup": "ffdhe3072" 00:21:52.338 } 00:21:52.338 } 00:21:52.338 ]' 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.338 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.597 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.166 13:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.166 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.426 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.685 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.685 13:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.685 { 00:21:53.685 "cntlid": 117, 00:21:53.685 "qid": 0, 00:21:53.685 "state": "enabled", 00:21:53.685 "thread": "nvmf_tgt_poll_group_000", 00:21:53.685 "listen_address": { 00:21:53.685 "trtype": "TCP", 00:21:53.685 "adrfam": "IPv4", 00:21:53.685 "traddr": "10.0.0.2", 00:21:53.685 "trsvcid": "4420" 00:21:53.685 }, 00:21:53.685 "peer_address": { 00:21:53.685 "trtype": "TCP", 00:21:53.685 "adrfam": "IPv4", 00:21:53.685 "traddr": "10.0.0.1", 00:21:53.685 "trsvcid": "42414" 00:21:53.685 }, 00:21:53.685 "auth": { 00:21:53.685 "state": "completed", 00:21:53.685 "digest": "sha512", 00:21:53.685 "dhgroup": "ffdhe3072" 00:21:53.685 } 00:21:53.685 } 00:21:53.685 ]' 00:21:53.685 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.944 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.203 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.773 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.032 00:21:55.032 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.032 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.032 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.291 { 00:21:55.291 "cntlid": 119, 00:21:55.291 "qid": 0, 00:21:55.291 "state": "enabled", 00:21:55.291 "thread": 
"nvmf_tgt_poll_group_000", 00:21:55.291 "listen_address": { 00:21:55.291 "trtype": "TCP", 00:21:55.291 "adrfam": "IPv4", 00:21:55.291 "traddr": "10.0.0.2", 00:21:55.291 "trsvcid": "4420" 00:21:55.291 }, 00:21:55.291 "peer_address": { 00:21:55.291 "trtype": "TCP", 00:21:55.291 "adrfam": "IPv4", 00:21:55.291 "traddr": "10.0.0.1", 00:21:55.291 "trsvcid": "42440" 00:21:55.291 }, 00:21:55.291 "auth": { 00:21:55.291 "state": "completed", 00:21:55.291 "digest": "sha512", 00:21:55.291 "dhgroup": "ffdhe3072" 00:21:55.291 } 00:21:55.291 } 00:21:55.291 ]' 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.291 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.549 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.549 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.549 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.549 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:21:56.116 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.116 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:56.116 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.116 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.116 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.116 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.117 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.117 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.117 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.376 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.658 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.658 { 00:21:56.658 "cntlid": 121, 00:21:56.658 "qid": 0, 00:21:56.658 "state": "enabled", 00:21:56.658 "thread": "nvmf_tgt_poll_group_000", 00:21:56.658 "listen_address": { 00:21:56.658 "trtype": "TCP", 00:21:56.658 "adrfam": "IPv4", 00:21:56.658 "traddr": "10.0.0.2", 00:21:56.658 "trsvcid": "4420" 00:21:56.658 }, 00:21:56.658 "peer_address": { 00:21:56.658 "trtype": "TCP", 00:21:56.658 "adrfam": 
"IPv4", 00:21:56.658 "traddr": "10.0.0.1", 00:21:56.658 "trsvcid": "42468" 00:21:56.658 }, 00:21:56.658 "auth": { 00:21:56.658 "state": "completed", 00:21:56.658 "digest": "sha512", 00:21:56.658 "dhgroup": "ffdhe4096" 00:21:56.658 } 00:21:56.658 } 00:21:56.658 ]' 00:21:56.658 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.917 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.175 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.743 
13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.743 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.002 00:21:58.002 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.002 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.002 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.262 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.262 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.262 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.262 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.262 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.262 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.262 { 00:21:58.262 "cntlid": 123, 00:21:58.262 "qid": 0, 00:21:58.262 "state": "enabled", 00:21:58.262 "thread": "nvmf_tgt_poll_group_000", 00:21:58.262 "listen_address": { 00:21:58.262 "trtype": "TCP", 00:21:58.262 "adrfam": "IPv4", 00:21:58.262 "traddr": "10.0.0.2", 00:21:58.262 "trsvcid": "4420" 00:21:58.262 }, 00:21:58.262 "peer_address": { 00:21:58.262 "trtype": "TCP", 00:21:58.262 "adrfam": "IPv4", 00:21:58.262 "traddr": "10.0.0.1", 00:21:58.262 "trsvcid": "42482" 00:21:58.262 }, 00:21:58.262 "auth": { 00:21:58.262 "state": "completed", 00:21:58.262 "digest": "sha512", 00:21:58.262 "dhgroup": "ffdhe4096" 00:21:58.262 } 00:21:58.262 } 00:21:58.262 ]' 00:21:58.262 13:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.262 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.521 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.089 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.348 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.607 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.607 { 00:21:59.607 "cntlid": 125, 00:21:59.607 "qid": 0, 00:21:59.607 "state": "enabled", 00:21:59.607 "thread": "nvmf_tgt_poll_group_000", 00:21:59.607 "listen_address": { 00:21:59.607 "trtype": "TCP", 00:21:59.607 "adrfam": "IPv4", 00:21:59.607 "traddr": "10.0.0.2", 00:21:59.607 "trsvcid": "4420" 00:21:59.607 }, 00:21:59.607 "peer_address": { 00:21:59.607 "trtype": "TCP", 00:21:59.607 "adrfam": "IPv4", 00:21:59.607 "traddr": "10.0.0.1", 00:21:59.607 "trsvcid": "55100" 00:21:59.607 }, 00:21:59.607 "auth": { 00:21:59.607 "state": "completed", 00:21:59.607 "digest": "sha512", 00:21:59.607 "dhgroup": "ffdhe4096" 00:21:59.607 } 00:21:59.607 } 00:21:59.607 ]' 00:21:59.607 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.865 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.865 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.865 
13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.865 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.865 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.865 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.865 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.175 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.435 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.694 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.954 00:22:00.954 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.954 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.954 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.213 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.213 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.213 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.214 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.214 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.214 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.214 { 00:22:01.214 "cntlid": 127, 00:22:01.214 "qid": 0, 00:22:01.214 "state": "enabled", 00:22:01.214 "thread": "nvmf_tgt_poll_group_000", 00:22:01.214 "listen_address": { 00:22:01.214 "trtype": "TCP", 00:22:01.214 "adrfam": "IPv4", 00:22:01.214 "traddr": "10.0.0.2", 00:22:01.214 "trsvcid": "4420" 00:22:01.214 }, 00:22:01.214 "peer_address": { 00:22:01.214 "trtype": "TCP", 00:22:01.214 "adrfam": "IPv4", 00:22:01.214 "traddr": "10.0.0.1", 00:22:01.214 "trsvcid": "55128" 00:22:01.214 }, 00:22:01.214 "auth": { 00:22:01.214 "state": "completed", 00:22:01.214 "digest": "sha512", 00:22:01.214 "dhgroup": "ffdhe4096" 00:22:01.214 } 00:22:01.214 } 00:22:01.214 ]' 00:22:01.214 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.214 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.214 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.214 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.214 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.214 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.214 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.214 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.473 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.041 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.300 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.300 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.300 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.300 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.558 00:22:02.558 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.558 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.558 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.818 { 00:22:02.818 "cntlid": 129, 00:22:02.818 "qid": 0, 00:22:02.818 "state": "enabled", 00:22:02.818 "thread": "nvmf_tgt_poll_group_000", 00:22:02.818 "listen_address": { 00:22:02.818 "trtype": "TCP", 00:22:02.818 "adrfam": "IPv4", 00:22:02.818 "traddr": "10.0.0.2", 00:22:02.818 "trsvcid": "4420" 00:22:02.818 }, 00:22:02.818 "peer_address": { 00:22:02.818 "trtype": "TCP", 00:22:02.818 "adrfam": "IPv4", 00:22:02.818 "traddr": "10.0.0.1", 00:22:02.818 "trsvcid": "55146" 00:22:02.818 }, 00:22:02.818 "auth": { 00:22:02.818 "state": "completed", 00:22:02.818 "digest": "sha512", 00:22:02.818 "dhgroup": "ffdhe6144" 00:22:02.818 } 00:22:02.818 } 00:22:02.818 ]' 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.818 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.077 
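(After the SPDK host-side controller is detached, each iteration pushes the same key material through the kernel initiator with nvme-cli, as the connect line below this point shows. The secrets are in the standard DH-HMAC-CHAP representation "DHHC-1:NN:<base64>:", where NN identifies the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/-384/-512). Condensed from the trace, with the long secrets elided:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q <host_nqn> --hostid 006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-secret 'DHHC-1:00:<host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # reset the target for the next keyid
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host_nqn>

A clean "disconnected 1 controller(s)" implies the connect, and therefore the kernel-side authentication, succeeded; removing the host then clears the way for the next key.)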
13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.646 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.906 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.165 00:22:04.165 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.165 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.165 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.424 { 00:22:04.424 "cntlid": 131, 00:22:04.424 "qid": 0, 00:22:04.424 "state": "enabled", 00:22:04.424 "thread": "nvmf_tgt_poll_group_000", 00:22:04.424 "listen_address": { 00:22:04.424 "trtype": "TCP", 00:22:04.424 "adrfam": "IPv4", 00:22:04.424 "traddr": "10.0.0.2", 00:22:04.424 "trsvcid": "4420" 00:22:04.424 }, 00:22:04.424 "peer_address": { 00:22:04.424 "trtype": "TCP", 00:22:04.424 "adrfam": "IPv4", 00:22:04.424 "traddr": "10.0.0.1", 00:22:04.424 "trsvcid": "55162" 00:22:04.424 }, 00:22:04.424 "auth": { 00:22:04.424 "state": "completed", 00:22:04.424 "digest": "sha512", 00:22:04.424 "dhgroup": "ffdhe6144" 00:22:04.424 } 00:22:04.424 } 00:22:04.424 ]' 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.424 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.683 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.251 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.251 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.821 
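(Every hostrpc line in this trace expands to the same rpc.py invocation with -s /var/tmp/host.sock, which is what separates the two SPDK applications under test: rpc_cmd talks to the nvmf target on its default socket, while hostrpc talks to the initiator-side application that owns the bdev_nvme controller. A plausible reconstruction of the wrapper at target/auth.sh@31, consistent with the expansions logged here ($rootdir standing in for the spdk checkout is an assumption):

    hostrpc() {
        # forward to the host application's RPC server rather than the target's
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

Keeping the two RPC sockets distinct is what lets the test assert both sides of the exchange: attach results on the host socket, qpair auth state on the target socket.)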
00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.821 { 00:22:05.821 "cntlid": 133, 00:22:05.821 "qid": 0, 00:22:05.821 "state": "enabled", 00:22:05.821 "thread": "nvmf_tgt_poll_group_000", 00:22:05.821 "listen_address": { 00:22:05.821 "trtype": "TCP", 00:22:05.821 "adrfam": "IPv4", 00:22:05.821 "traddr": "10.0.0.2", 00:22:05.821 "trsvcid": "4420" 00:22:05.821 }, 00:22:05.821 "peer_address": { 00:22:05.821 "trtype": "TCP", 00:22:05.821 "adrfam": "IPv4", 00:22:05.821 "traddr": "10.0.0.1", 00:22:05.821 "trsvcid": "55192" 00:22:05.821 }, 00:22:05.821 "auth": { 00:22:05.821 "state": "completed", 00:22:05.821 "digest": "sha512", 00:22:05.821 "dhgroup": "ffdhe6144" 00:22:05.821 } 00:22:05.821 } 00:22:05.821 ]' 00:22:05.821 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.080 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.339 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.908 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.908 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.166 00:22:07.166 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.166 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.166 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.425 { 00:22:07.425 "cntlid": 135, 00:22:07.425 "qid": 0, 00:22:07.425 "state": "enabled", 00:22:07.425 "thread": "nvmf_tgt_poll_group_000", 00:22:07.425 "listen_address": { 00:22:07.425 "trtype": "TCP", 00:22:07.425 "adrfam": "IPv4", 00:22:07.425 "traddr": "10.0.0.2", 00:22:07.425 "trsvcid": "4420" 00:22:07.425 }, 00:22:07.425 "peer_address": { 00:22:07.425 "trtype": "TCP", 00:22:07.425 "adrfam": "IPv4", 00:22:07.425 "traddr": "10.0.0.1", 00:22:07.425 "trsvcid": "55208" 00:22:07.425 }, 00:22:07.425 "auth": { 00:22:07.425 "state": "completed", 00:22:07.425 "digest": "sha512", 00:22:07.425 "dhgroup": "ffdhe6144" 00:22:07.425 } 00:22:07.425 } 00:22:07.425 ]' 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.425 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.684 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.684 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.684 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.684 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.684 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.685 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.253 
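(Note the asymmetry in the keyid-3 iterations above: the nvmf_subsystem_add_host and bdev_nvme_attach_controller lines carry only --dhchap-key key3, with no --dhchap-ctrlr-key. That is the @37 expansion at work; with ckeys[3] empty, bash's ${var:+word} yields nothing, so the option pair silently disappears and keyid 3 exercises unidirectional (host-only) authentication, while keyids 0-2 exercise bidirectional. In isolation:

    ckeys[3]=                                        # no controller key for this id
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # :+ treats empty as unset
    echo "${#ckey[@]}"                               # prints 0: the flag pair is dropped

That also explains why the matching nvme connect for keyid 3 passes only --dhchap-secret and no --dhchap-ctrl-secret.)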
13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.253 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.513 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.081 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.081 
13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.081 { 00:22:09.081 "cntlid": 137, 00:22:09.081 "qid": 0, 00:22:09.081 "state": "enabled", 00:22:09.081 "thread": "nvmf_tgt_poll_group_000", 00:22:09.081 "listen_address": { 00:22:09.081 "trtype": "TCP", 00:22:09.081 "adrfam": "IPv4", 00:22:09.081 "traddr": "10.0.0.2", 00:22:09.081 "trsvcid": "4420" 00:22:09.081 }, 00:22:09.081 "peer_address": { 00:22:09.081 "trtype": "TCP", 00:22:09.081 "adrfam": "IPv4", 00:22:09.081 "traddr": "10.0.0.1", 00:22:09.081 "trsvcid": "55230" 00:22:09.081 }, 00:22:09.081 "auth": { 00:22:09.081 "state": "completed", 00:22:09.081 "digest": "sha512", 00:22:09.081 "dhgroup": "ffdhe8192" 00:22:09.081 } 00:22:09.081 } 00:22:09.081 ]' 00:22:09.081 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.339 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.339 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.339 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.339 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.339 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.339 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.339 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.598 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.165 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.165 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.165 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.165 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.734 00:22:10.734 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.734 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.735 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.048 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.048 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.048 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.048 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.048 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.048 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.048 { 00:22:11.048 "cntlid": 139, 00:22:11.048 "qid": 0, 00:22:11.048 "state": "enabled", 00:22:11.048 "thread": "nvmf_tgt_poll_group_000", 00:22:11.048 "listen_address": { 00:22:11.048 "trtype": "TCP", 00:22:11.049 "adrfam": "IPv4", 00:22:11.049 "traddr": "10.0.0.2", 00:22:11.049 "trsvcid": "4420" 00:22:11.049 }, 00:22:11.049 "peer_address": { 00:22:11.049 "trtype": "TCP", 00:22:11.049 "adrfam": "IPv4", 00:22:11.049 "traddr": "10.0.0.1", 00:22:11.049 "trsvcid": "41454" 00:22:11.049 }, 00:22:11.049 "auth": { 00:22:11.049 "state": "completed", 00:22:11.049 "digest": "sha512", 00:22:11.049 "dhgroup": "ffdhe8192" 00:22:11.049 } 00:22:11.049 } 00:22:11.049 ]' 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.049 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.308 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MmYwMmZmMGRlODJhNmNhZWNhYTc5NzFlNWU2NmM5M2Ha7+5C: --dhchap-ctrl-secret DHHC-1:02:ZDA4MjM5MmY4N2MxNDU2YTRlOWQ2ODUzNmVhNTFjNzY2NTZhNTkwNGI0NWE4NWFlSXSZMQ==: 00:22:11.876 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.876 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:11.876 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.877 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.446 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.446 { 00:22:12.446 "cntlid": 141, 00:22:12.446 "qid": 0, 00:22:12.446 "state": "enabled", 00:22:12.446 "thread": "nvmf_tgt_poll_group_000", 00:22:12.446 "listen_address": 
{ 00:22:12.446 "trtype": "TCP", 00:22:12.446 "adrfam": "IPv4", 00:22:12.446 "traddr": "10.0.0.2", 00:22:12.446 "trsvcid": "4420" 00:22:12.446 }, 00:22:12.446 "peer_address": { 00:22:12.446 "trtype": "TCP", 00:22:12.446 "adrfam": "IPv4", 00:22:12.446 "traddr": "10.0.0.1", 00:22:12.446 "trsvcid": "41476" 00:22:12.446 }, 00:22:12.446 "auth": { 00:22:12.446 "state": "completed", 00:22:12.446 "digest": "sha512", 00:22:12.446 "dhgroup": "ffdhe8192" 00:22:12.446 } 00:22:12.446 } 00:22:12.446 ]' 00:22:12.446 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.705 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.964 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTA2YjAxOTE3YzkxMzY0NTQ0ZGEzNTczMDNhNDQ5MTA0ZmMzMWUxOTA2NTA5ZWEyc3lZCg==: --dhchap-ctrl-secret DHHC-1:01:OWM4YWNkMTgxNmZiMGY4OTIwZDNkYWRlMmU3YzZmOTLzx0Fz: 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.533 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.102 00:22:14.102 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.102 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.102 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.362 { 00:22:14.362 "cntlid": 143, 00:22:14.362 "qid": 0, 00:22:14.362 "state": "enabled", 00:22:14.362 "thread": "nvmf_tgt_poll_group_000", 00:22:14.362 "listen_address": { 00:22:14.362 "trtype": "TCP", 00:22:14.362 "adrfam": "IPv4", 00:22:14.362 "traddr": "10.0.0.2", 00:22:14.362 "trsvcid": "4420" 00:22:14.362 }, 00:22:14.362 "peer_address": { 00:22:14.362 "trtype": "TCP", 00:22:14.362 "adrfam": "IPv4", 00:22:14.362 "traddr": "10.0.0.1", 00:22:14.362 "trsvcid": "41510" 00:22:14.362 }, 00:22:14.362 "auth": { 00:22:14.362 "state": "completed", 00:22:14.362 "digest": "sha512", 00:22:14.362 "dhgroup": 
"ffdhe8192" 00:22:14.362 } 00:22:14.362 } 00:22:14.362 ]' 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.362 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.621 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:22:15.189 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.190 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.190 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:15.190 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.190 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.190 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:15.190 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:15.190 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.449 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.449 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.449 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.449 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.449 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.449 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.709 00:22:15.709 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.709 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.709 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.969 { 00:22:15.969 "cntlid": 145, 00:22:15.969 "qid": 0, 00:22:15.969 "state": "enabled", 00:22:15.969 "thread": "nvmf_tgt_poll_group_000", 00:22:15.969 "listen_address": { 00:22:15.969 "trtype": "TCP", 00:22:15.969 "adrfam": "IPv4", 00:22:15.969 "traddr": "10.0.0.2", 00:22:15.969 "trsvcid": "4420" 00:22:15.969 }, 00:22:15.969 "peer_address": { 00:22:15.969 "trtype": "TCP", 00:22:15.969 "adrfam": "IPv4", 00:22:15.969 "traddr": "10.0.0.1", 00:22:15.969 "trsvcid": "41530" 00:22:15.969 }, 00:22:15.969 "auth": { 00:22:15.969 
"state": "completed", 00:22:15.969 "digest": "sha512", 00:22:15.969 "dhgroup": "ffdhe8192" 00:22:15.969 } 00:22:15.969 } 00:22:15.969 ]' 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.969 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.228 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.228 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.228 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MzFkOGE5MzI5NGE0ZjRlZjgxNDYxMjg2YmIxNWQzM2EwNjZjY2ZhNjkwZmE3ZjI0MLzYDQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiZGM5ODNmYWRiMGQxNjg2MGIzZDRjYWM2MjI3NjBmOWU0YmM1ZGM0Yzg4ZTU4OTg0YTEwYWM2NmVlZTQ1M87WMIU=: 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:16.797 13:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:16.797 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:17.365 request: 00:22:17.365 { 00:22:17.365 "name": "nvme0", 00:22:17.365 "trtype": "tcp", 00:22:17.365 "traddr": "10.0.0.2", 00:22:17.365 "adrfam": "ipv4", 00:22:17.365 "trsvcid": "4420", 00:22:17.365 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:17.365 "prchk_reftag": false, 00:22:17.365 "prchk_guard": false, 00:22:17.365 "hdgst": false, 00:22:17.365 "ddgst": false, 00:22:17.365 "dhchap_key": "key2", 00:22:17.365 "method": "bdev_nvme_attach_controller", 00:22:17.365 "req_id": 1 00:22:17.365 } 00:22:17.365 Got JSON-RPC error response 00:22:17.365 response: 00:22:17.365 { 00:22:17.365 "code": -5, 00:22:17.365 "message": "Input/output error" 00:22:17.365 } 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.365 
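
[editor's note] The failed attach traced above is the expected outcome: the subsystem's host entry was re-added with key1 only, so a controller attach presenting key2 is rejected during DH-HMAC-CHAP negotiation and rpc.py surfaces JSON-RPC error -5 ("Input/output error"). A minimal standalone sketch of the same negative check follows; the NOT helper here is a simplified stand-in for the one in common/autotest_common.sh, and the socket path, NQNs, and key names are copied from this log.

    # Sketch only: verify that DH-HMAC-CHAP rejects a mismatched key.
    # NOT inverts a command's exit status, like autotest_common.sh's helper.
    NOT() { if "$@"; then return 1; else return 0; fi; }

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

    # Target side: this host may authenticate with key1 only.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

    # Host side: presenting key2 must fail with -5 (Input/output error).
    NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2
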
13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:17.365 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:17.624 request: 00:22:17.624 { 00:22:17.624 "name": "nvme0", 00:22:17.624 "trtype": "tcp", 00:22:17.624 "traddr": "10.0.0.2", 00:22:17.624 "adrfam": "ipv4", 00:22:17.624 "trsvcid": "4420", 00:22:17.624 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:17.624 "prchk_reftag": false, 00:22:17.624 "prchk_guard": false, 00:22:17.624 "hdgst": false, 00:22:17.624 "ddgst": false, 00:22:17.624 "dhchap_key": "key1", 00:22:17.624 "dhchap_ctrlr_key": "ckey2", 00:22:17.624 "method": "bdev_nvme_attach_controller", 00:22:17.624 "req_id": 1 00:22:17.624 } 00:22:17.624 Got JSON-RPC error response 00:22:17.624 response: 00:22:17.625 { 00:22:17.625 "code": -5, 00:22:17.625 "message": "Input/output error" 00:22:17.625 } 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.625 13:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.625 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.193 request: 00:22:18.193 { 00:22:18.193 "name": "nvme0", 00:22:18.193 "trtype": "tcp", 00:22:18.193 "traddr": "10.0.0.2", 00:22:18.193 "adrfam": "ipv4", 00:22:18.193 "trsvcid": "4420", 00:22:18.193 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.193 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:18.193 "prchk_reftag": false, 00:22:18.193 "prchk_guard": false, 00:22:18.193 "hdgst": false, 00:22:18.193 "ddgst": false, 00:22:18.193 "dhchap_key": "key1", 00:22:18.193 "dhchap_ctrlr_key": "ckey1", 00:22:18.193 "method": "bdev_nvme_attach_controller", 00:22:18.193 "req_id": 1 00:22:18.193 } 00:22:18.193 Got JSON-RPC error response 00:22:18.193 response: 00:22:18.193 { 00:22:18.193 "code": -5, 00:22:18.193 "message": "Input/output error" 00:22:18.193 } 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 295304 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 295304 ']' 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 295304 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295304 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295304' 00:22:18.193 killing process with pid 295304 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 295304 00:22:18.193 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 295304 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=315686 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 315686 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 315686 ']' 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.452 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.388 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.388 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:19.388 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.388 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.388 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 315686 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 315686 ']' 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
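
[editor's note] waitforlisten above blocks until the freshly started nvmf_tgt (pid 315686, launched in the cvl_0_0_ns_spdk netns with -L nvmf_auth debug logging) answers on /var/tmp/spdk.sock. Because the target was started with --wait-for-rpc, framework initialization is held until released by an explicit RPC. A rough equivalent of that wait step, assuming rpc.py's rpc_get_methods and framework_start_init methods (neither appears in this log), is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    # Poll until the target's RPC server answers (waitforlisten equivalent).
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    # --wait-for-rpc holds initialization; release it once listening.
    "$rpc" -s "$sock" framework_start_init
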
00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.388 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.647 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.905 00:22:19.905 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.905 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.905 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.164 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.164 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.164 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.164 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.165 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.165 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.165 { 00:22:20.165 "cntlid": 1, 00:22:20.165 "qid": 0, 00:22:20.165 "state": "enabled", 00:22:20.165 "thread": "nvmf_tgt_poll_group_000", 00:22:20.165 "listen_address": { 00:22:20.165 "trtype": "TCP", 00:22:20.165 "adrfam": "IPv4", 00:22:20.165 "traddr": "10.0.0.2", 00:22:20.165 "trsvcid": "4420" 00:22:20.165 }, 00:22:20.165 "peer_address": { 00:22:20.165 "trtype": "TCP", 00:22:20.165 "adrfam": "IPv4", 00:22:20.165 "traddr": "10.0.0.1", 00:22:20.165 "trsvcid": "60770" 00:22:20.165 }, 00:22:20.165 "auth": { 00:22:20.165 "state": "completed", 00:22:20.165 "digest": "sha512", 00:22:20.165 "dhgroup": "ffdhe8192" 00:22:20.165 } 00:22:20.165 } 00:22:20.165 ]' 00:22:20.165 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.165 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.165 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.165 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.165 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.424 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.424 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.424 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.424 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MzA4M2ZlOWRmNWMxODNiZTIxNDQ4MGMxYWQ3OThmY2U5OWQ1ZjA2YzNlYzE2Zjc1Zjc4YzY1ZTgzNGM4NTU2M2WfuUc=: 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:20.989 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.248 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.507 request: 00:22:21.507 { 00:22:21.507 "name": "nvme0", 00:22:21.507 "trtype": "tcp", 00:22:21.507 "traddr": "10.0.0.2", 00:22:21.507 "adrfam": "ipv4", 00:22:21.507 "trsvcid": "4420", 00:22:21.507 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:21.507 "prchk_reftag": false, 00:22:21.507 "prchk_guard": false, 00:22:21.507 "hdgst": false, 00:22:21.507 "ddgst": false, 00:22:21.507 "dhchap_key": "key3", 00:22:21.507 "method": "bdev_nvme_attach_controller", 00:22:21.507 "req_id": 1 00:22:21.507 } 00:22:21.507 Got JSON-RPC error response 00:22:21.507 response: 00:22:21.507 { 00:22:21.507 "code": -5, 00:22:21.507 "message": "Input/output error" 00:22:21.507 } 00:22:21.507 13:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.507 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.766 request: 00:22:21.766 { 00:22:21.766 "name": "nvme0", 00:22:21.766 "trtype": "tcp", 00:22:21.766 "traddr": "10.0.0.2", 00:22:21.766 "adrfam": "ipv4", 00:22:21.766 "trsvcid": "4420", 00:22:21.766 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:21.766 "prchk_reftag": false, 00:22:21.766 "prchk_guard": false, 00:22:21.766 "hdgst": false, 00:22:21.766 "ddgst": false, 00:22:21.766 "dhchap_key": "key3", 00:22:21.766 
"method": "bdev_nvme_attach_controller", 00:22:21.766 "req_id": 1 00:22:21.766 } 00:22:21.766 Got JSON-RPC error response 00:22:21.766 response: 00:22:21.766 { 00:22:21.766 "code": -5, 00:22:21.766 "message": "Input/output error" 00:22:21.766 } 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.766 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:22.025 request: 00:22:22.025 { 00:22:22.025 "name": "nvme0", 00:22:22.025 "trtype": "tcp", 00:22:22.025 "traddr": "10.0.0.2", 00:22:22.025 "adrfam": "ipv4", 00:22:22.025 "trsvcid": "4420", 00:22:22.025 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:22.025 "prchk_reftag": false, 00:22:22.025 "prchk_guard": false, 00:22:22.025 "hdgst": false, 00:22:22.025 "ddgst": false, 00:22:22.025 "dhchap_key": "key0", 00:22:22.025 "dhchap_ctrlr_key": "key1", 00:22:22.025 "method": "bdev_nvme_attach_controller", 00:22:22.025 "req_id": 1 00:22:22.025 } 00:22:22.025 Got JSON-RPC error response 00:22:22.025 response: 00:22:22.025 { 00:22:22.025 "code": -5, 00:22:22.025 "message": "Input/output error" 00:22:22.025 } 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.025 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.026 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.026 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.026 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.284 00:22:22.284 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:22.284 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
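The exchange above is the negative half of the DH-CHAP matrix in target/auth.sh: after nvmf_subsystem_remove_host/nvmf_subsystem_add_host recycles the host entry, a host-side attach still offering key3 (and later key0/key1) must be rejected, and the NOT wrapper in autotest_common.sh inverts the non-zero exit status into a pass; the JSON-RPC code -5 with "Input/output error" is the -EIO that bdev_nvme reports when the fabrics-level CONNECT fails authentication. The run then re-attaches with key0 alone, which succeeds, and confirms the controller via bdev_nvme_get_controllers piped through jq. A minimal sketch of the same expected-failure probe, assuming a host-side SPDK app serving /var/tmp/host.sock and the key names used by this test:

    # hedged sketch: rpc.py is scripts/rpc.py from the SPDK tree; flags exactly as traced above
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key key1; then
        echo "unexpected: attach succeeded"            # the NOT wrapper would fail the test here
    else
        echo "attach rejected as expected (-5, Input/output error)"
    fi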
00:22:22.284 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.543 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.543 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.543 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 295328 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 295328 ']' 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 295328 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295328 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295328' 00:22:22.802 killing process with pid 295328 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 295328 00:22:22.802 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 295328 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.061 rmmod nvme_tcp 00:22:23.061 rmmod nvme_fabrics 00:22:23.061 rmmod nvme_keyring 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
315686 ']' 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 315686 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 315686 ']' 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 315686 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:23.061 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.062 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 315686 00:22:23.320 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:23.321 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:23.321 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 315686' 00:22:23.321 killing process with pid 315686 00:22:23.321 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 315686 00:22:23.321 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 315686 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.321 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xRn /tmp/spdk.key-sha256.s0b /tmp/spdk.key-sha384.E2V /tmp/spdk.key-sha512.ghv /tmp/spdk.key-sha512.0dK /tmp/spdk.key-sha384.uQU /tmp/spdk.key-sha256.QnR '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:25.858 00:22:25.858 real 2m9.213s 00:22:25.858 user 4m49.099s 00:22:25.858 sys 0m28.560s 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.858 ************************************ 00:22:25.858 END TEST nvmf_auth_target 00:22:25.858 ************************************ 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:25.858 ************************************ 00:22:25.858 START TEST nvmf_bdevio_no_huge 00:22:25.858 ************************************ 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:25.858 * Looking for test storage... 00:22:25.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.858 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.859 13:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.859 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.506 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.507 13:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:32.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.507 13:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:32.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:32.507 Found net devices under 0000:af:00.0: cvl_0_0 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
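Device discovery above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it seeds the e810/x722/mlx arrays with known vendor:device pairs, keeps the e810 list on this rig (two 0x8086:0x159b functions bound to ice), and then resolves each PCI function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A minimal sketch of that sysfs lookup, assuming the addresses detected above:

    # hedged sketch: map a PCI function to its net interface the way common.sh does
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] && echo "$pci -> ${netdir##*/}"   # prints cvl_0_0 / cvl_0_1 here
        done
    done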
00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:32.507 Found net devices under 0000:af:00.1: cvl_0_1 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.507 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.507 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:22:32.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:22:32.507 00:22:32.507 --- 10.0.0.2 ping statistics --- 00:22:32.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.507 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:32.507 00:22:32.507 --- 10.0.0.1 ping statistics --- 00:22:32.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.507 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.507 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=320093 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 320093 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 320093 ']' 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
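With both ports on one machine, nvmf_tcp_init splits them across network namespaces so the NVMe/TCP traffic actually crosses interfaces instead of short-circuiting in one stack: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, and both directions are ping-verified before nvmf_tgt starts inside the namespace with --no-huge -s 1024, i.e. a 1024 MB memory pool allocated without hugepages. A condensed sketch of that setup plus the target provisioning the bdevio run needs next (RPCs as traced further below; rpc.py talks to the default /var/tmp/spdk.sock):

    # hedged sketch, condensing the nvmf_tcp_init steps from the log above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    # once the RPC socket is up, provision the subsystem exercised by bdevio:
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420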
00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.508 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.508 [2024-07-25 13:50:29.147671] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:32.508 [2024-07-25 13:50:29.147733] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:32.508 [2024-07-25 13:50:29.199123] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:32.508 [2024-07-25 13:50:29.226119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.508 [2024-07-25 13:50:29.306557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.508 [2024-07-25 13:50:29.306595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.508 [2024-07-25 13:50:29.306604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.508 [2024-07-25 13:50:29.306612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.508 [2024-07-25 13:50:29.306619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.508 [2024-07-25 13:50:29.306762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:32.508 [2024-07-25 13:50:29.306873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:32.508 [2024-07-25 13:50:29.306984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:32.508 [2024-07-25 13:50:29.306990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.077 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.077 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:33.077 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.077 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.077 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.336 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.336 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.336 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 [2024-07-25 13:50:30.004303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b 
Malloc0 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 Malloc0 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 [2024-07-25 13:50:30.048944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:33.336 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:33.336 { 00:22:33.336 "params": { 00:22:33.337 "name": "Nvme$subsystem", 00:22:33.337 "trtype": "$TEST_TRANSPORT", 00:22:33.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.337 "adrfam": "ipv4", 00:22:33.337 "trsvcid": "$NVMF_PORT", 00:22:33.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.337 "hdgst": ${hdgst:-false}, 00:22:33.337 "ddgst": ${ddgst:-false} 00:22:33.337 }, 00:22:33.337 "method": "bdev_nvme_attach_controller" 00:22:33.337 } 00:22:33.337 EOF 00:22:33.337 )") 00:22:33.337 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 
-- # cat 00:22:33.337 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:33.337 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:33.337 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:33.337 "params": { 00:22:33.337 "name": "Nvme1", 00:22:33.337 "trtype": "tcp", 00:22:33.337 "traddr": "10.0.0.2", 00:22:33.337 "adrfam": "ipv4", 00:22:33.337 "trsvcid": "4420", 00:22:33.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.337 "hdgst": false, 00:22:33.337 "ddgst": false 00:22:33.337 }, 00:22:33.337 "method": "bdev_nvme_attach_controller" 00:22:33.337 }' 00:22:33.337 [2024-07-25 13:50:30.102946] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:33.337 [2024-07-25 13:50:30.103003] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid320363 ] 00:22:33.337 [2024-07-25 13:50:30.150567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:33.337 [2024-07-25 13:50:30.177912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.596 [2024-07-25 13:50:30.259384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.596 [2024-07-25 13:50:30.259476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.596 [2024-07-25 13:50:30.259478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.855 I/O targets: 00:22:33.856 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:33.856 00:22:33.856 00:22:33.856 CUnit - A unit testing framework for C - Version 2.1-3 00:22:33.856 http://cunit.sourceforge.net/ 00:22:33.856 00:22:33.856 00:22:33.856 Suite: bdevio tests on: Nvme1n1 00:22:33.856 Test: blockdev write read block ...passed 00:22:33.856 Test: blockdev write zeroes read block ...passed 00:22:33.856 Test: blockdev write zeroes read no split ...passed 00:22:33.856 Test: blockdev write zeroes read split ...passed 00:22:34.115 Test: blockdev write zeroes read split partial ...passed 00:22:34.115 Test: blockdev reset ...[2024-07-25 13:50:30.792017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.115 [2024-07-25 13:50:30.792078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1960 (9): Bad file descriptor 00:22:34.115 [2024-07-25 13:50:30.807216] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:34.115 passed 00:22:34.115 Test: blockdev write read 8 blocks ...passed 00:22:34.115 Test: blockdev write read size > 128k ...passed 00:22:34.115 Test: blockdev write read invalid size ...passed 00:22:34.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:34.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:34.115 Test: blockdev write read max offset ...passed 00:22:34.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:34.115 Test: blockdev writev readv 8 blocks ...passed 00:22:34.115 Test: blockdev writev readv 30 x 1block ...passed 00:22:34.375 Test: blockdev writev readv block ...passed 00:22:34.375 Test: blockdev writev readv size > 128k ...passed 00:22:34.375 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:34.375 Test: blockdev comparev and writev ...[2024-07-25 13:50:31.023646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.023678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.023695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.023706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.024044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.024057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.024071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.024080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.024407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.024420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.024434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.024449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.024772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.024786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.024800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.375 [2024-07-25 13:50:31.024810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:34.375 passed 00:22:34.375 Test: blockdev nvme passthru rw ...passed 00:22:34.375 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:50:31.107229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.375 [2024-07-25 13:50:31.107246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.107453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.375 [2024-07-25 13:50:31.107465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.107661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.375 [2024-07-25 13:50:31.107672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:34.375 [2024-07-25 13:50:31.107877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.375 [2024-07-25 13:50:31.107889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:34.375 passed 00:22:34.375 Test: blockdev nvme admin passthru ...passed 00:22:34.375 Test: blockdev copy ...passed 00:22:34.375 00:22:34.375 Run Summary: Type Total Ran Passed Failed Inactive 00:22:34.375 suites 1 1 n/a 0 0 00:22:34.375 tests 23 23 23 0 0 00:22:34.375 asserts 152 152 152 0 n/a 00:22:34.375 00:22:34.375 Elapsed time = 1.197 seconds 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.635 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.635 rmmod nvme_tcp 00:22:34.635 rmmod nvme_fabrics 00:22:34.635 rmmod nvme_keyring 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 320093 ']' 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 320093 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 320093 ']' 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 320093 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320093 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320093' 00:22:34.895 killing process with pid 320093 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 320093 00:22:34.895 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 320093 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.155 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.691 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.691 00:22:37.691 real 0m11.744s 00:22:37.691 user 0m14.687s 00:22:37.691 sys 0m6.123s 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.692 ************************************ 00:22:37.692 END TEST nvmf_bdevio_no_huge 00:22:37.692 ************************************ 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.692 ************************************ 00:22:37.692 START TEST nvmf_tls 00:22:37.692 ************************************ 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.692 * Looking for test storage... 00:22:37.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
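Annotation: the nvmf/common.sh preamble sourced above pins the run's constants: listener ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, and a host NQN generated from the machine UUID. A minimal sketch of the same environment outside the harness (values illustrative, not the harness's exact source):

    # defaults the common.sh preamble establishes for the test
    NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<machine uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip through the last ':' to get the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")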
00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.692 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:44.314 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.314 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:44.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:44.315 Found net devices under 0000:af:00.0: cvl_0_0 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:44.315 Found net devices under 0000:af:00.1: cvl_0_1 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.315 13:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.315 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:22:44.315 00:22:44.315 --- 10.0.0.2 ping statistics --- 00:22:44.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.315 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:22:44.315 00:22:44.315 --- 10.0.0.1 ping statistics --- 00:22:44.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.315 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=324308 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 324308 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 324308 ']' 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.315 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.315 [2024-07-25 13:50:41.164221] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
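Annotation: by this point the harness has carved the test topology: the first port (cvl_0_0) moved into namespace cvl_0_0_ns_spdk at 10.0.0.2, the second (cvl_0_1) kept in the root namespace at 10.0.0.1, reachability verified by ping in both directions, and nvmf_tgt launched inside the namespace with --wait-for-rpc. A hand-built equivalent, with NIC and namespace names as placeholders:

    ip netns add tgt_ns                                   # hypothetical names throughout
    ip link set eth1 netns tgt_ns
    ip addr add 10.0.0.1/24 dev eth0; ip link set eth0 up
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth1
    ip netns exec tgt_ns ip link set eth1 up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2                                    # root ns -> namespace side
    # start the target where the listener address lives, then poll its RPC socket
    ip netns exec tgt_ns ./build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done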
00:22:44.315 [2024-07-25 13:50:41.164267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.315 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.575 [2024-07-25 13:50:41.205983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:44.575 [2024-07-25 13:50:41.239424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.575 [2024-07-25 13:50:41.277616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.575 [2024-07-25 13:50:41.277656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.575 [2024-07-25 13:50:41.277669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.575 [2024-07-25 13:50:41.277677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.575 [2024-07-25 13:50:41.277684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.575 [2024-07-25 13:50:41.277704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.142 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.142 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:45.142 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.142 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.142 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.142 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.142 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:45.142 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:45.400 true 00:22:45.400 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:45.400 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.659 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:45.659 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:45.659 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:45.659 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.659 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:45.917 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:45.917 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 
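Annotation: because the target was started with --wait-for-rpc, the test can swap the socket layer before any subsystem exists: it selects the ssl implementation, probes the default TLS version (0, i.e. auto-negotiate), then pins 13 and reads it back; the version-7 and kTLS round-trips that follow use the same pattern. The core sequence against a standalone target, as a sketch:

    ./scripts/rpc.py sock_set_default_impl -i ssl
    ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    ./scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
    # framework_start_init (issued further down in the trace) then leaves init-wait mode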
00:22:45.918 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:46.177 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.177 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:46.177 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:46.177 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:46.177 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:46.177 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.436 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:46.436 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:46.436 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:46.706 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.706 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:46.706 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:46.706 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:46.706 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:46.963 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.963 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.BSvDb5G76M 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ip27oFTH8Y 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.BSvDb5G76M 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ip27oFTH8Y 00:22:47.222 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:47.482 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:47.741 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.BSvDb5G76M 00:22:47.741 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BSvDb5G76M 00:22:47.741 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:47.741 [2024-07-25 13:50:44.533273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.741 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:47.999 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:47.999 [2024-07-25 13:50:44.854082] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:47.999 [2024-07-25 13:50:44.854289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.999 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:48.257 malloc0 00:22:48.257 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:48.515 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BSvDb5G76M 00:22:48.515 [2024-07-25 13:50:45.355630] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:48.515 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BSvDb5G76M 00:22:48.515 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.766 Initializing NVMe Controllers 00:23:00.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.766 Initialization complete. Launching workers. 00:23:00.766 ======================================================== 00:23:00.766 Latency(us) 00:23:00.766 Device Information : IOPS MiB/s Average min max 00:23:00.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16480.79 64.38 3883.76 779.20 5401.00 00:23:00.766 ======================================================== 00:23:00.766 Total : 16480.79 64.38 3883.76 779.20 5401.00 00:23:00.766 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BSvDb5G76M 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BSvDb5G76M' 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=326732 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 326732 /var/tmp/bdevperf.sock 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 326732 ']' 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.766 13:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.766 [2024-07-25 13:50:55.526923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:00.766 [2024-07-25 13:50:55.526981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326732 ] 00:23:00.766 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.766 [2024-07-25 13:50:55.563842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:00.766 [2024-07-25 13:50:55.595219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.766 [2024-07-25 13:50:55.633064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BSvDb5G76M 00:23:00.766 [2024-07-25 13:50:55.868258] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.766 [2024-07-25 13:50:55.868348] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.766 TLSTESTn1 00:23:00.766 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:00.766 Running I/O for 10 seconds... 
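Annotation: run_bdevperf exercises the data path from the initiator side: bdevperf starts idle (-z) on its own RPC socket, bdev_nvme_attach_controller builds a TLS-secured NVMe/TCP bdev using the same interchange-format PSK file (the NVMeTLSkey-1:01:... secret written earlier), and bdevperf.py fires the verify workload whose results follow. Condensed from the commands in the trace, with repository paths shortened:

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BSvDb5G76M
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests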
00:23:10.741 00:23:10.741 Latency(us) 00:23:10.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.741 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.741 Verification LBA range: start 0x0 length 0x2000 00:23:10.741 TLSTESTn1 : 10.03 4647.26 18.15 0.00 0.00 27490.57 4666.16 76336.33 00:23:10.741 =================================================================================================================== 00:23:10.741 Total : 4647.26 18.15 0.00 0.00 27490.57 4666.16 76336.33 00:23:10.741 0 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 326732 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 326732 ']' 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 326732 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 326732 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 326732' 00:23:10.741 killing process with pid 326732 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 326732 00:23:10.741 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.741 00:23:10.741 Latency(us) 00:23:10.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.741 =================================================================================================================== 00:23:10.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.741 [2024-07-25 13:51:06.155500] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 326732 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ip27oFTH8Y 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ip27oFTH8Y 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.741 
13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ip27oFTH8Y 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ip27oFTH8Y' 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=329127 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 329127 /var/tmp/bdevperf.sock 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 329127 ']' 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.741 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.741 [2024-07-25 13:51:06.377017] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:10.741 [2024-07-25 13:51:06.377072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329127 ] 00:23:10.741 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.741 [2024-07-25 13:51:06.414523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
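Annotation: this second run is deliberately broken: it attaches with /tmp/tmp.ip27oFTH8Y, the second key, which was never registered for host1 on cnode1, so the TLS handshake must collapse and the harness's NOT wrapper turns that failure into a pass. Roughly what NOT does (an illustrative paraphrase of the autotest helper, not its exact source):

    NOT() {                    # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT run_bdevperf ... /tmp/tmp.ip27oFTH8Y   # green only if the attach errors out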
00:23:10.741 [2024-07-25 13:51:06.446413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.741 [2024-07-25 13:51:06.481135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.741 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.741 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ip27oFTH8Y 00:23:10.742 [2024-07-25 13:51:07.310947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.742 [2024-07-25 13:51:07.311036] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.742 [2024-07-25 13:51:07.315722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.742 [2024-07-25 13:51:07.316353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cfe50 (107): Transport endpoint is not connected 00:23:10.742 [2024-07-25 13:51:07.317345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cfe50 (9): Bad file descriptor 00:23:10.742 [2024-07-25 13:51:07.318350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.742 [2024-07-25 13:51:07.318364] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.742 [2024-07-25 13:51:07.318376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:10.742 request: 00:23:10.742 { 00:23:10.742 "name": "TLSTEST", 00:23:10.742 "trtype": "tcp", 00:23:10.742 "traddr": "10.0.0.2", 00:23:10.742 "adrfam": "ipv4", 00:23:10.742 "trsvcid": "4420", 00:23:10.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.742 "prchk_reftag": false, 00:23:10.742 "prchk_guard": false, 00:23:10.742 "hdgst": false, 00:23:10.742 "ddgst": false, 00:23:10.742 "psk": "/tmp/tmp.ip27oFTH8Y", 00:23:10.742 "method": "bdev_nvme_attach_controller", 00:23:10.742 "req_id": 1 00:23:10.742 } 00:23:10.742 Got JSON-RPC error response 00:23:10.742 response: 00:23:10.742 { 00:23:10.742 "code": -5, 00:23:10.742 "message": "Input/output error" 00:23:10.742 } 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 329127 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 329127 ']' 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 329127 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329127 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329127' 00:23:10.742 killing process with pid 329127 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 329127 00:23:10.742 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.742 00:23:10.742 Latency(us) 00:23:10.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.742 =================================================================================================================== 00:23:10.742 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.742 [2024-07-25 13:51:07.389872] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 329127 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BSvDb5G76M 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BSvDb5G76M 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BSvDb5G76M 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BSvDb5G76M' 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=329296 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 329296 /var/tmp/bdevperf.sock 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 329296 ']' 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.742 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.742 [2024-07-25 13:51:07.602405] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:10.742 [2024-07-25 13:51:07.602461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329296 ] 00:23:11.002 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.002 [2024-07-25 13:51:07.638385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
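Annotation: this time the key is right but the host is wrong: only host1 was granted access to cnode1 via nvmf_subsystem_add_host, so the target-side lookup for the TLS PSK identity 'NVMe0R01 <hostnqn> <subnqn>' (visible in the errors below) finds nothing for host2. Outside a negative test, the fix would simply be to register the second host as well:

    # hypothetical: grant host2 access with the same PSK file
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.BSvDb5G76M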
00:23:11.002 [2024-07-25 13:51:07.670592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.002 [2024-07-25 13:51:07.707980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.002 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.002 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.002 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.BSvDb5G76M 00:23:11.262 [2024-07-25 13:51:07.939600] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.262 [2024-07-25 13:51:07.939699] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:11.262 [2024-07-25 13:51:07.946982] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:11.262 [2024-07-25 13:51:07.947005] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:11.262 [2024-07-25 13:51:07.947031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.262 [2024-07-25 13:51:07.947972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bde50 (107): Transport endpoint is not connected 00:23:11.262 [2024-07-25 13:51:07.948965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bde50 (9): Bad file descriptor 00:23:11.262 [2024-07-25 13:51:07.949966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.262 [2024-07-25 13:51:07.949978] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.262 [2024-07-25 13:51:07.949990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:11.262 request: 00:23:11.262 { 00:23:11.262 "name": "TLSTEST", 00:23:11.262 "trtype": "tcp", 00:23:11.262 "traddr": "10.0.0.2", 00:23:11.262 "adrfam": "ipv4", 00:23:11.262 "trsvcid": "4420", 00:23:11.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.262 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.262 "prchk_reftag": false, 00:23:11.262 "prchk_guard": false, 00:23:11.262 "hdgst": false, 00:23:11.262 "ddgst": false, 00:23:11.262 "psk": "/tmp/tmp.BSvDb5G76M", 00:23:11.262 "method": "bdev_nvme_attach_controller", 00:23:11.262 "req_id": 1 00:23:11.262 } 00:23:11.262 Got JSON-RPC error response 00:23:11.262 response: 00:23:11.262 { 00:23:11.262 "code": -5, 00:23:11.262 "message": "Input/output error" 00:23:11.262 } 00:23:11.262 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 329296 00:23:11.262 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 329296 ']' 00:23:11.262 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 329296 00:23:11.262 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.262 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.262 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329296 00:23:11.262 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:11.262 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:11.262 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329296' 00:23:11.262 killing process with pid 329296 00:23:11.262 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 329296 00:23:11.262 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.262 00:23:11.262 Latency(us) 00:23:11.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.262 =================================================================================================================== 00:23:11.262 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.262 [2024-07-25 13:51:08.024837] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:11.262 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 329296 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BSvDb5G76M 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BSvDb5G76M 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BSvDb5G76M 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BSvDb5G76M' 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=329407 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 329407 /var/tmp/bdevperf.sock 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 329407 ']' 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.522 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.522 [2024-07-25 13:51:08.234926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:11.522 [2024-07-25 13:51:08.234979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329407 ] 00:23:11.522 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.522 [2024-07-25 13:51:08.272435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
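The NOT run_bdevperf invocation above is the suite's negative-test idiom: the key file itself is valid, but the target registered it only for the host2/cnode1 pairing, so attaching as host1 to cnode2 is expected to fail and the wrapper inverts the exit status. A minimal sketch of such a wrapper, assuming a simplified form of the NOT function that actually lives in autotest_common.sh:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))    # invert: succeed only when the wrapped command failed
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BSvDb5G76M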
00:23:11.522 [2024-07-25 13:51:08.303679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.522 [2024-07-25 13:51:08.339181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BSvDb5G76M 00:23:11.782 [2024-07-25 13:51:08.566406] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.782 [2024-07-25 13:51:08.566500] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:11.782 [2024-07-25 13:51:08.571050] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.782 [2024-07-25 13:51:08.571073] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.782 [2024-07-25 13:51:08.571101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.782 [2024-07-25 13:51:08.571733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f84e50 (107): Transport endpoint is not connected 00:23:11.782 [2024-07-25 13:51:08.572722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f84e50 (9): Bad file descriptor 00:23:11.782 [2024-07-25 13:51:08.573723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:11.782 [2024-07-25 13:51:08.573735] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.782 [2024-07-25 13:51:08.573746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:11.782 request: 00:23:11.782 { 00:23:11.782 "name": "TLSTEST", 00:23:11.782 "trtype": "tcp", 00:23:11.782 "traddr": "10.0.0.2", 00:23:11.782 "adrfam": "ipv4", 00:23:11.782 "trsvcid": "4420", 00:23:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.782 "prchk_reftag": false, 00:23:11.782 "prchk_guard": false, 00:23:11.782 "hdgst": false, 00:23:11.782 "ddgst": false, 00:23:11.782 "psk": "/tmp/tmp.BSvDb5G76M", 00:23:11.782 "method": "bdev_nvme_attach_controller", 00:23:11.782 "req_id": 1 00:23:11.782 } 00:23:11.782 Got JSON-RPC error response 00:23:11.782 response: 00:23:11.782 { 00:23:11.782 "code": -5, 00:23:11.782 "message": "Input/output error" 00:23:11.782 } 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 329407 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 329407 ']' 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 329407 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329407 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329407' 00:23:11.782 killing process with pid 329407 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 329407 00:23:11.782 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.782 00:23:11.782 Latency(us) 00:23:11.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.782 =================================================================================================================== 00:23:11.782 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.782 [2024-07-25 13:51:08.643361] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:11.782 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 329407 00:23:12.041 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:12.041 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=329438 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 329438 /var/tmp/bdevperf.sock 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 329438 ']' 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.042 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.042 [2024-07-25 13:51:08.843236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:12.042 [2024-07-25 13:51:08.843294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329438 ] 00:23:12.042 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.042 [2024-07-25 13:51:08.881034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
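This variant passes an empty PSK, so run_bdevperf launches bdev_nvme_attach_controller with no --psk at all against a listener that requires TLS, and the plaintext connect is torn down. At the JSON-RPC layer, the call rpc.py sends over the bdevperf socket looks roughly like the sketch below (fields trimmed to the essentials of the request dump that follows; an nc build with Unix-socket support is an assumption):

  printf '%s' '{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
    "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
               "adrfam": "ipv4", "trsvcid": "4420",
               "subnqn": "nqn.2016-06.io.spdk:cnode1",
               "hostnqn": "nqn.2016-06.io.spdk:host1"}}' | nc -U /var/tmp/bdevperf.sock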
00:23:12.042 [2024-07-25 13:51:08.911523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.301 [2024-07-25 13:51:08.949005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.301 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.301 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:12.301 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:12.561 [2024-07-25 13:51:09.200204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:12.561 [2024-07-25 13:51:09.201353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa48360 (9): Bad file descriptor 00:23:12.561 [2024-07-25 13:51:09.202353] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.561 [2024-07-25 13:51:09.202366] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:12.561 [2024-07-25 13:51:09.202377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.561 request: 00:23:12.561 { 00:23:12.561 "name": "TLSTEST", 00:23:12.561 "trtype": "tcp", 00:23:12.561 "traddr": "10.0.0.2", 00:23:12.561 "adrfam": "ipv4", 00:23:12.561 "trsvcid": "4420", 00:23:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.561 "prchk_reftag": false, 00:23:12.561 "prchk_guard": false, 00:23:12.561 "hdgst": false, 00:23:12.561 "ddgst": false, 00:23:12.561 "method": "bdev_nvme_attach_controller", 00:23:12.561 "req_id": 1 00:23:12.561 } 00:23:12.561 Got JSON-RPC error response 00:23:12.561 response: 00:23:12.561 { 00:23:12.561 "code": -5, 00:23:12.561 "message": "Input/output error" 00:23:12.561 } 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 329438 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 329438 ']' 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 329438 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329438 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329438' 00:23:12.561 killing process with pid 329438 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 329438 00:23:12.561 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.561 00:23:12.561 Latency(us) 00:23:12.561 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.561 =================================================================================================================== 00:23:12.561 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 329438 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 324308 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 324308 ']' 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 324308 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:12.561 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 324308 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 324308' 00:23:12.821 killing process with pid 324308 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 324308 00:23:12.821 [2024-07-25 13:51:09.493006] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 324308 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:12.821 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:13.081 13:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.HUBCeDzLAC 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.HUBCeDzLAC 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=329704 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 329704 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 329704 ']' 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.081 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.081 [2024-07-25 13:51:09.789967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:13.081 [2024-07-25 13:51:09.790019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.081 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.081 [2024-07-25 13:51:09.831268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:13.081 [2024-07-25 13:51:09.866688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.081 [2024-07-25 13:51:09.905200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.081 [2024-07-25 13:51:09.905239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.081 [2024-07-25 13:51:09.905248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.081 [2024-07-25 13:51:09.905258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.081 [2024-07-25 13:51:09.905264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
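The NVMeTLSkey-1:02:...: string generated just above is the TP-8011 PSK interchange format: a fixed prefix, the digest selector passed to format_interchange_psk (2 here), then a base64 blob and a closing colon. Decoding the blob shows the configured key bytes followed by a 4-byte tail, consistent with a CRC-32 of the key appended for integrity. A sketch of the same computation the inline python heredoc in nvmf/common.sh performs (the little-endian byte order of the CRC is an inference, not taken from this log):

  key=00112233445566778899aabbccddeeff0011223344556677
  python3 - "$key" <<'PY'
  import base64, sys, zlib
  key = sys.argv[1].encode()                    # key material exactly as passed in
  crc = zlib.crc32(key).to_bytes(4, "little")   # integrity tag appended to the key
  print(f"NVMeTLSkey-1:02:{base64.b64encode(key + crc).decode()}:")
  PY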
00:23:13.081 [2024-07-25 13:51:09.905285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.HUBCeDzLAC 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HUBCeDzLAC 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.018 [2024-07-25 13:51:10.798130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.018 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.277 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.277 [2024-07-25 13:51:11.118948] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.277 [2024-07-25 13:51:11.119147] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.277 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.536 malloc0 00:23:14.536 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:14.795 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:14.795 [2024-07-25 13:51:11.632527] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:14.795 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HUBCeDzLAC 00:23:14.795 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HUBCeDzLAC' 00:23:14.796 13:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=330001 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 330001 /var/tmp/bdevperf.sock 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 330001 ']' 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.796 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.055 [2024-07-25 13:51:11.691443] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:15.055 [2024-07-25 13:51:11.691496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330001 ] 00:23:15.055 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.055 [2024-07-25 13:51:11.726709] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
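For reference, the setup_nvmf_tgt sequence that target/tls.sh@165 ran earlier in this run condenses to six rpc.py calls, flags exactly as logged above (reading -k as the switch that makes the listener require TLS is inferred from the 'TLS support is considered experimental' notice nvmf_tcp_listen prints when it is present):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC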
00:23:15.055 [2024-07-25 13:51:11.757773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.055 [2024-07-25 13:51:11.795990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.055 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.055 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:15.055 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:15.313 [2024-07-25 13:51:12.031400] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.313 [2024-07-25 13:51:12.031478] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:15.313 TLSTESTn1 00:23:15.313 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:15.572 Running I/O for 10 seconds... 00:23:25.591 00:23:25.591 Latency(us) 00:23:25.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.591 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:25.591 Verification LBA range: start 0x0 length 0x2000 00:23:25.591 TLSTESTn1 : 10.03 4641.54 18.13 0.00 0.00 27526.74 6658.46 57881.40 00:23:25.591 =================================================================================================================== 00:23:25.591 Total : 4641.54 18.13 0.00 0.00 27526.74 6658.46 57881.40 00:23:25.591 0 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 330001 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 330001 ']' 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 330001 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330001 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330001' 00:23:25.591 killing process with pid 330001 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 330001 00:23:25.591 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.591 00:23:25.591 Latency(us) 00:23:25.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.591 
=================================================================================================================== 00:23:25.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.591 [2024-07-25 13:51:22.346136] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:25.591 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 330001 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.HUBCeDzLAC 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HUBCeDzLAC 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HUBCeDzLAC 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HUBCeDzLAC 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HUBCeDzLAC' 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=331836 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 331836 /var/tmp/bdevperf.sock 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 331836 ']' 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
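The 10-second verify run above (TLSTESTn1, roughly 4.6k IOPS through the TLS-wrapped connection) is bdevperf's two-step RPC mode end to end: start bdevperf with -z so it idles on its RPC socket, attach the controller over that socket, then kick off the preconfigured workload with perform_tests. Condensed from the commands logged above, with paths as in this workspace:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests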
00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.851 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.851 [2024-07-25 13:51:22.563624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:25.851 [2024-07-25 13:51:22.563677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331836 ] 00:23:25.851 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.851 [2024-07-25 13:51:22.599117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:25.851 [2024-07-25 13:51:22.630400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.851 [2024-07-25 13:51:22.665105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:26.111 [2024-07-25 13:51:22.897159] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.111 [2024-07-25 13:51:22.897215] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:26.111 [2024-07-25 13:51:22.897224] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.HUBCeDzLAC 00:23:26.111 request: 00:23:26.111 { 00:23:26.111 "name": "TLSTEST", 00:23:26.111 "trtype": "tcp", 00:23:26.111 "traddr": "10.0.0.2", 00:23:26.111 "adrfam": "ipv4", 00:23:26.111 "trsvcid": "4420", 00:23:26.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.111 "prchk_reftag": false, 00:23:26.111 "prchk_guard": false, 00:23:26.111 "hdgst": false, 00:23:26.111 "ddgst": false, 00:23:26.111 "psk": "/tmp/tmp.HUBCeDzLAC", 00:23:26.111 "method": "bdev_nvme_attach_controller", 00:23:26.111 "req_id": 1 00:23:26.111 } 00:23:26.111 Got JSON-RPC error response 00:23:26.111 response: 00:23:26.111 { 00:23:26.111 "code": -1, 00:23:26.111 "message": "Operation not permitted" 00:23:26.111 } 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 331836 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 331836 ']' 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 331836 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 331836 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 331836' 00:23:26.111 killing process with pid 331836 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 331836 00:23:26.111 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.111 00:23:26.111 Latency(us) 00:23:26.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.111 =================================================================================================================== 00:23:26.111 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.111 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 331836 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 329704 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 329704 ']' 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 329704 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329704 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329704' 00:23:26.372 killing process with pid 329704 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 329704 00:23:26.372 [2024-07-25 13:51:23.185348] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.372 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 329704 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=331886 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 331886 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 331886 ']' 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.635 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.635 [2024-07-25 13:51:23.423424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:26.635 [2024-07-25 13:51:23.423480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.635 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.635 [2024-07-25 13:51:23.463967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:26.635 [2024-07-25 13:51:23.498777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.894 [2024-07-25 13:51:23.535645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.894 [2024-07-25 13:51:23.535685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.894 [2024-07-25 13:51:23.535695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.894 [2024-07-25 13:51:23.535704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.894 [2024-07-25 13:51:23.535729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
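The waitforlisten call above, issued after every app launch in this log, simply polls until the new process answers on its RPC Unix socket before the test proceeds. A minimal sketch, assuming a simplified form of the helper in autotest_common.sh and using rpc_get_methods as the liveness probe:

  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # app died while starting up
          scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1                                       # never started listening
  }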
00:23:26.894 [2024-07-25 13:51:23.535751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.HUBCeDzLAC 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.HUBCeDzLAC 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.HUBCeDzLAC 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HUBCeDzLAC 00:23:27.462 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:27.721 [2024-07-25 13:51:24.417141] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.722 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.722 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.981 [2024-07-25 13:51:24.758012] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.981 [2024-07-25 13:51:24.758204] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.981 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:28.241 malloc0 00:23:28.241 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:28.500 [2024-07-25 13:51:25.287700] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:28.500 [2024-07-25 13:51:25.287730] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:28.500 [2024-07-25 13:51:25.287752] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:28.500 request: 00:23:28.500 { 00:23:28.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.500 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.500 "psk": "/tmp/tmp.HUBCeDzLAC", 00:23:28.500 "method": "nvmf_subsystem_add_host", 00:23:28.500 "req_id": 1 00:23:28.500 } 00:23:28.500 Got JSON-RPC error response 00:23:28.500 response: 00:23:28.500 { 00:23:28.500 "code": -32603, 00:23:28.500 "message": "Internal error" 00:23:28.500 } 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 331886 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 331886 ']' 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 331886 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 331886 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 331886' 00:23:28.500 killing process with pid 331886 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 331886 00:23:28.500 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 331886 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.HUBCeDzLAC 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=332407 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 332407 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 332407 ']' 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.760 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.760 [2024-07-25 13:51:25.604463] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:28.760 [2024-07-25 13:51:25.604517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.760 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.760 [2024-07-25 13:51:25.643902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:29.018 [2024-07-25 13:51:25.678309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.018 [2024-07-25 13:51:25.715542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.018 [2024-07-25 13:51:25.715581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.018 [2024-07-25 13:51:25.715590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.018 [2024-07-25 13:51:25.715599] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.018 [2024-07-25 13:51:25.715606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
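Both failures leading up to this restart trace back to the chmod 0666 above: the initiator's bdev_nvme_load_psk refused the key ('Incorrect permissions for PSK file', surfacing as JSON-RPC -1 Operation not permitted) and the target's tcp_load_psk refused it through nvmf_subsystem_add_host (-32603 Internal error), which is why target/tls.sh@181 restores 0600 before retrying. A small sketch of keeping the key private from the moment the file exists (mirroring the mktemp/echo/chmod steps logged earlier):

  key_long_path=$(mktemp)                 # mktemp already creates the file 0600
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"             # re-assert, as target/tls.sh@162 does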
00:23:29.018 [2024-07-25 13:51:25.715633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.HUBCeDzLAC 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HUBCeDzLAC 00:23:29.597 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.856 [2024-07-25 13:51:26.600471] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.856 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:30.115 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:30.115 [2024-07-25 13:51:26.941343] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.115 [2024-07-25 13:51:26.941549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.115 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:30.375 malloc0 00:23:30.375 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:30.634 [2024-07-25 13:51:27.422805] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=332699 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 332699 /var/tmp/bdevperf.sock 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # 
'[' -z 332699 ']' 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.634 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.634 [2024-07-25 13:51:27.465854] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:30.634 [2024-07-25 13:51:27.465905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332699 ] 00:23:30.634 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.634 [2024-07-25 13:51:27.501317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:30.894 [2024-07-25 13:51:27.531742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.894 [2024-07-25 13:51:27.569397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.894 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.894 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:30.894 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:31.152 [2024-07-25 13:51:27.816755] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.152 [2024-07-25 13:51:27.816829] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.152 TLSTESTn1 00:23:31.152 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:31.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:31.412 "subsystems": [ 00:23:31.412 { 00:23:31.412 "subsystem": "keyring", 00:23:31.412 "config": [] 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "subsystem": "iobuf", 00:23:31.412 "config": [ 00:23:31.412 { 00:23:31.412 "method": "iobuf_set_options", 00:23:31.412 "params": { 00:23:31.412 "small_pool_count": 8192, 00:23:31.412 "large_pool_count": 1024, 00:23:31.412 "small_bufsize": 8192, 00:23:31.412 "large_bufsize": 135168 00:23:31.412 } 00:23:31.412 } 00:23:31.412 ] 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "subsystem": "sock", 00:23:31.412 "config": [ 00:23:31.412 { 00:23:31.412 "method": "sock_set_default_impl", 00:23:31.412 "params": { 00:23:31.412 "impl_name": "posix" 00:23:31.412 } 00:23:31.412 }, 
00:23:31.412 { 00:23:31.412 "method": "sock_impl_set_options", 00:23:31.412 "params": { 00:23:31.412 "impl_name": "ssl", 00:23:31.412 "recv_buf_size": 4096, 00:23:31.412 "send_buf_size": 4096, 00:23:31.412 "enable_recv_pipe": true, 00:23:31.412 "enable_quickack": false, 00:23:31.412 "enable_placement_id": 0, 00:23:31.412 "enable_zerocopy_send_server": true, 00:23:31.412 "enable_zerocopy_send_client": false, 00:23:31.412 "zerocopy_threshold": 0, 00:23:31.412 "tls_version": 0, 00:23:31.412 "enable_ktls": false 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "method": "sock_impl_set_options", 00:23:31.412 "params": { 00:23:31.412 "impl_name": "posix", 00:23:31.412 "recv_buf_size": 2097152, 00:23:31.412 "send_buf_size": 2097152, 00:23:31.412 "enable_recv_pipe": true, 00:23:31.412 "enable_quickack": false, 00:23:31.412 "enable_placement_id": 0, 00:23:31.412 "enable_zerocopy_send_server": true, 00:23:31.412 "enable_zerocopy_send_client": false, 00:23:31.412 "zerocopy_threshold": 0, 00:23:31.412 "tls_version": 0, 00:23:31.412 "enable_ktls": false 00:23:31.412 } 00:23:31.412 } 00:23:31.412 ] 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "subsystem": "vmd", 00:23:31.412 "config": [] 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "subsystem": "accel", 00:23:31.412 "config": [ 00:23:31.412 { 00:23:31.412 "method": "accel_set_options", 00:23:31.412 "params": { 00:23:31.412 "small_cache_size": 128, 00:23:31.412 "large_cache_size": 16, 00:23:31.412 "task_count": 2048, 00:23:31.412 "sequence_count": 2048, 00:23:31.412 "buf_count": 2048 00:23:31.412 } 00:23:31.412 } 00:23:31.412 ] 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "subsystem": "bdev", 00:23:31.412 "config": [ 00:23:31.412 { 00:23:31.412 "method": "bdev_set_options", 00:23:31.412 "params": { 00:23:31.412 "bdev_io_pool_size": 65535, 00:23:31.412 "bdev_io_cache_size": 256, 00:23:31.412 "bdev_auto_examine": true, 00:23:31.412 "iobuf_small_cache_size": 128, 00:23:31.412 "iobuf_large_cache_size": 16 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "method": "bdev_raid_set_options", 00:23:31.412 "params": { 00:23:31.412 "process_window_size_kb": 1024, 00:23:31.412 "process_max_bandwidth_mb_sec": 0 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "method": "bdev_iscsi_set_options", 00:23:31.412 "params": { 00:23:31.412 "timeout_sec": 30 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "method": "bdev_nvme_set_options", 00:23:31.412 "params": { 00:23:31.412 "action_on_timeout": "none", 00:23:31.412 "timeout_us": 0, 00:23:31.412 "timeout_admin_us": 0, 00:23:31.412 "keep_alive_timeout_ms": 10000, 00:23:31.412 "arbitration_burst": 0, 00:23:31.412 "low_priority_weight": 0, 00:23:31.412 "medium_priority_weight": 0, 00:23:31.412 "high_priority_weight": 0, 00:23:31.412 "nvme_adminq_poll_period_us": 10000, 00:23:31.412 "nvme_ioq_poll_period_us": 0, 00:23:31.412 "io_queue_requests": 0, 00:23:31.412 "delay_cmd_submit": true, 00:23:31.412 "transport_retry_count": 4, 00:23:31.412 "bdev_retry_count": 3, 00:23:31.412 "transport_ack_timeout": 0, 00:23:31.412 "ctrlr_loss_timeout_sec": 0, 00:23:31.412 "reconnect_delay_sec": 0, 00:23:31.412 "fast_io_fail_timeout_sec": 0, 00:23:31.412 "disable_auto_failback": false, 00:23:31.412 "generate_uuids": false, 00:23:31.412 "transport_tos": 0, 00:23:31.412 "nvme_error_stat": false, 00:23:31.412 "rdma_srq_size": 0, 00:23:31.412 "io_path_stat": false, 00:23:31.412 "allow_accel_sequence": false, 00:23:31.412 "rdma_max_cq_size": 0, 00:23:31.412 "rdma_cm_event_timeout_ms": 0, 00:23:31.412 
"dhchap_digests": [ 00:23:31.412 "sha256", 00:23:31.412 "sha384", 00:23:31.412 "sha512" 00:23:31.412 ], 00:23:31.412 "dhchap_dhgroups": [ 00:23:31.412 "null", 00:23:31.412 "ffdhe2048", 00:23:31.412 "ffdhe3072", 00:23:31.412 "ffdhe4096", 00:23:31.412 "ffdhe6144", 00:23:31.412 "ffdhe8192" 00:23:31.412 ] 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "method": "bdev_nvme_set_hotplug", 00:23:31.412 "params": { 00:23:31.412 "period_us": 100000, 00:23:31.412 "enable": false 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.412 "method": "bdev_malloc_create", 00:23:31.412 "params": { 00:23:31.412 "name": "malloc0", 00:23:31.412 "num_blocks": 8192, 00:23:31.412 "block_size": 4096, 00:23:31.412 "physical_block_size": 4096, 00:23:31.412 "uuid": "1d677858-b444-40d1-8ad0-c0c87163fa09", 00:23:31.412 "optimal_io_boundary": 0, 00:23:31.412 "md_size": 0, 00:23:31.412 "dif_type": 0, 00:23:31.412 "dif_is_head_of_md": false, 00:23:31.412 "dif_pi_format": 0 00:23:31.412 } 00:23:31.412 }, 00:23:31.412 { 00:23:31.413 "method": "bdev_wait_for_examine" 00:23:31.413 } 00:23:31.413 ] 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "subsystem": "nbd", 00:23:31.413 "config": [] 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "subsystem": "scheduler", 00:23:31.413 "config": [ 00:23:31.413 { 00:23:31.413 "method": "framework_set_scheduler", 00:23:31.413 "params": { 00:23:31.413 "name": "static" 00:23:31.413 } 00:23:31.413 } 00:23:31.413 ] 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "subsystem": "nvmf", 00:23:31.413 "config": [ 00:23:31.413 { 00:23:31.413 "method": "nvmf_set_config", 00:23:31.413 "params": { 00:23:31.413 "discovery_filter": "match_any", 00:23:31.413 "admin_cmd_passthru": { 00:23:31.413 "identify_ctrlr": false 00:23:31.413 } 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "method": "nvmf_set_max_subsystems", 00:23:31.413 "params": { 00:23:31.413 "max_subsystems": 1024 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "method": "nvmf_set_crdt", 00:23:31.413 "params": { 00:23:31.413 "crdt1": 0, 00:23:31.413 "crdt2": 0, 00:23:31.413 "crdt3": 0 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "method": "nvmf_create_transport", 00:23:31.413 "params": { 00:23:31.413 "trtype": "TCP", 00:23:31.413 "max_queue_depth": 128, 00:23:31.413 "max_io_qpairs_per_ctrlr": 127, 00:23:31.413 "in_capsule_data_size": 4096, 00:23:31.413 "max_io_size": 131072, 00:23:31.413 "io_unit_size": 131072, 00:23:31.413 "max_aq_depth": 128, 00:23:31.413 "num_shared_buffers": 511, 00:23:31.413 "buf_cache_size": 4294967295, 00:23:31.413 "dif_insert_or_strip": false, 00:23:31.413 "zcopy": false, 00:23:31.413 "c2h_success": false, 00:23:31.413 "sock_priority": 0, 00:23:31.413 "abort_timeout_sec": 1, 00:23:31.413 "ack_timeout": 0, 00:23:31.413 "data_wr_pool_size": 0 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "method": "nvmf_create_subsystem", 00:23:31.413 "params": { 00:23:31.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.413 "allow_any_host": false, 00:23:31.413 "serial_number": "SPDK00000000000001", 00:23:31.413 "model_number": "SPDK bdev Controller", 00:23:31.413 "max_namespaces": 10, 00:23:31.413 "min_cntlid": 1, 00:23:31.413 "max_cntlid": 65519, 00:23:31.413 "ana_reporting": false 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "method": "nvmf_subsystem_add_host", 00:23:31.413 "params": { 00:23:31.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.413 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.413 "psk": "/tmp/tmp.HUBCeDzLAC" 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 
00:23:31.413 "method": "nvmf_subsystem_add_ns", 00:23:31.413 "params": { 00:23:31.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.413 "namespace": { 00:23:31.413 "nsid": 1, 00:23:31.413 "bdev_name": "malloc0", 00:23:31.413 "nguid": "1D677858B44440D18AD0C0C87163FA09", 00:23:31.413 "uuid": "1d677858-b444-40d1-8ad0-c0c87163fa09", 00:23:31.413 "no_auto_visible": false 00:23:31.413 } 00:23:31.413 } 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "method": "nvmf_subsystem_add_listener", 00:23:31.413 "params": { 00:23:31.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.413 "listen_address": { 00:23:31.413 "trtype": "TCP", 00:23:31.413 "adrfam": "IPv4", 00:23:31.413 "traddr": "10.0.0.2", 00:23:31.413 "trsvcid": "4420" 00:23:31.413 }, 00:23:31.413 "secure_channel": true 00:23:31.413 } 00:23:31.413 } 00:23:31.413 ] 00:23:31.413 } 00:23:31.413 ] 00:23:31.413 }' 00:23:31.413 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:31.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:31.674 "subsystems": [ 00:23:31.674 { 00:23:31.674 "subsystem": "keyring", 00:23:31.674 "config": [] 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "subsystem": "iobuf", 00:23:31.674 "config": [ 00:23:31.674 { 00:23:31.674 "method": "iobuf_set_options", 00:23:31.674 "params": { 00:23:31.674 "small_pool_count": 8192, 00:23:31.674 "large_pool_count": 1024, 00:23:31.674 "small_bufsize": 8192, 00:23:31.674 "large_bufsize": 135168 00:23:31.674 } 00:23:31.674 } 00:23:31.674 ] 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "subsystem": "sock", 00:23:31.674 "config": [ 00:23:31.674 { 00:23:31.674 "method": "sock_set_default_impl", 00:23:31.674 "params": { 00:23:31.674 "impl_name": "posix" 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "sock_impl_set_options", 00:23:31.674 "params": { 00:23:31.674 "impl_name": "ssl", 00:23:31.674 "recv_buf_size": 4096, 00:23:31.674 "send_buf_size": 4096, 00:23:31.674 "enable_recv_pipe": true, 00:23:31.674 "enable_quickack": false, 00:23:31.674 "enable_placement_id": 0, 00:23:31.674 "enable_zerocopy_send_server": true, 00:23:31.674 "enable_zerocopy_send_client": false, 00:23:31.674 "zerocopy_threshold": 0, 00:23:31.674 "tls_version": 0, 00:23:31.674 "enable_ktls": false 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "sock_impl_set_options", 00:23:31.674 "params": { 00:23:31.674 "impl_name": "posix", 00:23:31.674 "recv_buf_size": 2097152, 00:23:31.674 "send_buf_size": 2097152, 00:23:31.674 "enable_recv_pipe": true, 00:23:31.674 "enable_quickack": false, 00:23:31.674 "enable_placement_id": 0, 00:23:31.674 "enable_zerocopy_send_server": true, 00:23:31.674 "enable_zerocopy_send_client": false, 00:23:31.674 "zerocopy_threshold": 0, 00:23:31.674 "tls_version": 0, 00:23:31.674 "enable_ktls": false 00:23:31.674 } 00:23:31.674 } 00:23:31.674 ] 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "subsystem": "vmd", 00:23:31.674 "config": [] 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "subsystem": "accel", 00:23:31.674 "config": [ 00:23:31.674 { 00:23:31.674 "method": "accel_set_options", 00:23:31.674 "params": { 00:23:31.674 "small_cache_size": 128, 00:23:31.674 "large_cache_size": 16, 00:23:31.674 "task_count": 2048, 00:23:31.674 "sequence_count": 2048, 00:23:31.674 "buf_count": 2048 00:23:31.674 } 00:23:31.674 } 00:23:31.674 ] 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "subsystem": "bdev", 00:23:31.674 
"config": [ 00:23:31.674 { 00:23:31.674 "method": "bdev_set_options", 00:23:31.674 "params": { 00:23:31.674 "bdev_io_pool_size": 65535, 00:23:31.674 "bdev_io_cache_size": 256, 00:23:31.674 "bdev_auto_examine": true, 00:23:31.674 "iobuf_small_cache_size": 128, 00:23:31.674 "iobuf_large_cache_size": 16 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "bdev_raid_set_options", 00:23:31.674 "params": { 00:23:31.674 "process_window_size_kb": 1024, 00:23:31.674 "process_max_bandwidth_mb_sec": 0 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "bdev_iscsi_set_options", 00:23:31.674 "params": { 00:23:31.674 "timeout_sec": 30 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "bdev_nvme_set_options", 00:23:31.674 "params": { 00:23:31.674 "action_on_timeout": "none", 00:23:31.674 "timeout_us": 0, 00:23:31.674 "timeout_admin_us": 0, 00:23:31.674 "keep_alive_timeout_ms": 10000, 00:23:31.674 "arbitration_burst": 0, 00:23:31.674 "low_priority_weight": 0, 00:23:31.674 "medium_priority_weight": 0, 00:23:31.674 "high_priority_weight": 0, 00:23:31.674 "nvme_adminq_poll_period_us": 10000, 00:23:31.674 "nvme_ioq_poll_period_us": 0, 00:23:31.674 "io_queue_requests": 512, 00:23:31.674 "delay_cmd_submit": true, 00:23:31.674 "transport_retry_count": 4, 00:23:31.674 "bdev_retry_count": 3, 00:23:31.674 "transport_ack_timeout": 0, 00:23:31.674 "ctrlr_loss_timeout_sec": 0, 00:23:31.674 "reconnect_delay_sec": 0, 00:23:31.674 "fast_io_fail_timeout_sec": 0, 00:23:31.674 "disable_auto_failback": false, 00:23:31.674 "generate_uuids": false, 00:23:31.674 "transport_tos": 0, 00:23:31.674 "nvme_error_stat": false, 00:23:31.674 "rdma_srq_size": 0, 00:23:31.674 "io_path_stat": false, 00:23:31.674 "allow_accel_sequence": false, 00:23:31.674 "rdma_max_cq_size": 0, 00:23:31.674 "rdma_cm_event_timeout_ms": 0, 00:23:31.674 "dhchap_digests": [ 00:23:31.674 "sha256", 00:23:31.674 "sha384", 00:23:31.674 "sha512" 00:23:31.674 ], 00:23:31.674 "dhchap_dhgroups": [ 00:23:31.674 "null", 00:23:31.674 "ffdhe2048", 00:23:31.674 "ffdhe3072", 00:23:31.674 "ffdhe4096", 00:23:31.674 "ffdhe6144", 00:23:31.674 "ffdhe8192" 00:23:31.674 ] 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "bdev_nvme_attach_controller", 00:23:31.674 "params": { 00:23:31.674 "name": "TLSTEST", 00:23:31.674 "trtype": "TCP", 00:23:31.674 "adrfam": "IPv4", 00:23:31.674 "traddr": "10.0.0.2", 00:23:31.674 "trsvcid": "4420", 00:23:31.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.674 "prchk_reftag": false, 00:23:31.674 "prchk_guard": false, 00:23:31.674 "ctrlr_loss_timeout_sec": 0, 00:23:31.674 "reconnect_delay_sec": 0, 00:23:31.674 "fast_io_fail_timeout_sec": 0, 00:23:31.674 "psk": "/tmp/tmp.HUBCeDzLAC", 00:23:31.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.674 "hdgst": false, 00:23:31.674 "ddgst": false 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "bdev_nvme_set_hotplug", 00:23:31.674 "params": { 00:23:31.674 "period_us": 100000, 00:23:31.674 "enable": false 00:23:31.674 } 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "method": "bdev_wait_for_examine" 00:23:31.674 } 00:23:31.674 ] 00:23:31.674 }, 00:23:31.674 { 00:23:31.674 "subsystem": "nbd", 00:23:31.674 "config": [] 00:23:31.674 } 00:23:31.674 ] 00:23:31.674 }' 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 332699 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 332699 ']' 00:23:31.674 13:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 332699 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332699 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332699' 00:23:31.674 killing process with pid 332699 00:23:31.674 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 332699 00:23:31.674 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.674 00:23:31.674 Latency(us) 00:23:31.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.674 =================================================================================================================== 00:23:31.674 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.674 [2024-07-25 13:51:28.481788] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:31.675 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 332699 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 332407 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 332407 ']' 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 332407 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332407 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332407' 00:23:31.934 killing process with pid 332407 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 332407 00:23:31.934 [2024-07-25 13:51:28.705338] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:31.934 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 332407 00:23:32.195 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:32.195 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.195 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.195 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@203 -- # echo '{ 00:23:32.195 "subsystems": [ 00:23:32.195 { 00:23:32.195 "subsystem": "keyring", 00:23:32.195 "config": [] 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "subsystem": "iobuf", 00:23:32.195 "config": [ 00:23:32.195 { 00:23:32.195 "method": "iobuf_set_options", 00:23:32.195 "params": { 00:23:32.195 "small_pool_count": 8192, 00:23:32.195 "large_pool_count": 1024, 00:23:32.195 "small_bufsize": 8192, 00:23:32.195 "large_bufsize": 135168 00:23:32.195 } 00:23:32.195 } 00:23:32.195 ] 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "subsystem": "sock", 00:23:32.195 "config": [ 00:23:32.195 { 00:23:32.195 "method": "sock_set_default_impl", 00:23:32.195 "params": { 00:23:32.195 "impl_name": "posix" 00:23:32.195 } 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "method": "sock_impl_set_options", 00:23:32.195 "params": { 00:23:32.195 "impl_name": "ssl", 00:23:32.195 "recv_buf_size": 4096, 00:23:32.195 "send_buf_size": 4096, 00:23:32.195 "enable_recv_pipe": true, 00:23:32.195 "enable_quickack": false, 00:23:32.195 "enable_placement_id": 0, 00:23:32.195 "enable_zerocopy_send_server": true, 00:23:32.195 "enable_zerocopy_send_client": false, 00:23:32.195 "zerocopy_threshold": 0, 00:23:32.195 "tls_version": 0, 00:23:32.195 "enable_ktls": false 00:23:32.195 } 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "method": "sock_impl_set_options", 00:23:32.195 "params": { 00:23:32.195 "impl_name": "posix", 00:23:32.195 "recv_buf_size": 2097152, 00:23:32.195 "send_buf_size": 2097152, 00:23:32.195 "enable_recv_pipe": true, 00:23:32.195 "enable_quickack": false, 00:23:32.195 "enable_placement_id": 0, 00:23:32.195 "enable_zerocopy_send_server": true, 00:23:32.195 "enable_zerocopy_send_client": false, 00:23:32.195 "zerocopy_threshold": 0, 00:23:32.195 "tls_version": 0, 00:23:32.195 "enable_ktls": false 00:23:32.195 } 00:23:32.195 } 00:23:32.195 ] 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "subsystem": "vmd", 00:23:32.195 "config": [] 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "subsystem": "accel", 00:23:32.195 "config": [ 00:23:32.195 { 00:23:32.195 "method": "accel_set_options", 00:23:32.195 "params": { 00:23:32.195 "small_cache_size": 128, 00:23:32.195 "large_cache_size": 16, 00:23:32.195 "task_count": 2048, 00:23:32.195 "sequence_count": 2048, 00:23:32.195 "buf_count": 2048 00:23:32.195 } 00:23:32.195 } 00:23:32.195 ] 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "subsystem": "bdev", 00:23:32.195 "config": [ 00:23:32.195 { 00:23:32.195 "method": "bdev_set_options", 00:23:32.195 "params": { 00:23:32.195 "bdev_io_pool_size": 65535, 00:23:32.195 "bdev_io_cache_size": 256, 00:23:32.195 "bdev_auto_examine": true, 00:23:32.195 "iobuf_small_cache_size": 128, 00:23:32.195 "iobuf_large_cache_size": 16 00:23:32.195 } 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "method": "bdev_raid_set_options", 00:23:32.195 "params": { 00:23:32.195 "process_window_size_kb": 1024, 00:23:32.195 "process_max_bandwidth_mb_sec": 0 00:23:32.195 } 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "method": "bdev_iscsi_set_options", 00:23:32.195 "params": { 00:23:32.195 "timeout_sec": 30 00:23:32.195 } 00:23:32.195 }, 00:23:32.195 { 00:23:32.195 "method": "bdev_nvme_set_options", 00:23:32.195 "params": { 00:23:32.195 "action_on_timeout": "none", 00:23:32.195 "timeout_us": 0, 00:23:32.195 "timeout_admin_us": 0, 00:23:32.195 "keep_alive_timeout_ms": 10000, 00:23:32.195 "arbitration_burst": 0, 00:23:32.195 "low_priority_weight": 0, 00:23:32.195 "medium_priority_weight": 0, 00:23:32.195 "high_priority_weight": 0, 00:23:32.195 
"nvme_adminq_poll_period_us": 10000, 00:23:32.195 "nvme_ioq_poll_period_us": 0, 00:23:32.195 "io_queue_requests": 0, 00:23:32.195 "delay_cmd_submit": true, 00:23:32.196 "transport_retry_count": 4, 00:23:32.196 "bdev_retry_count": 3, 00:23:32.196 "transport_ack_timeout": 0, 00:23:32.196 "ctrlr_loss_timeout_sec": 0, 00:23:32.196 "reconnect_delay_sec": 0, 00:23:32.196 "fast_io_fail_timeout_sec": 0, 00:23:32.196 "disable_auto_failback": false, 00:23:32.196 "generate_uuids": false, 00:23:32.196 "transport_tos": 0, 00:23:32.196 "nvme_error_stat": false, 00:23:32.196 "rdma_srq_size": 0, 00:23:32.196 "io_path_stat": false, 00:23:32.196 "allow_accel_sequence": false, 00:23:32.196 "rdma_max_cq_size": 0, 00:23:32.196 "rdma_cm_event_timeout_ms": 0, 00:23:32.196 "dhchap_digests": [ 00:23:32.196 "sha256", 00:23:32.196 "sha384", 00:23:32.196 "sha512" 00:23:32.196 ], 00:23:32.196 "dhchap_dhgroups": [ 00:23:32.196 "null", 00:23:32.196 "ffdhe2048", 00:23:32.196 "ffdhe3072", 00:23:32.196 "ffdhe4096", 00:23:32.196 "ffdhe6144", 00:23:32.196 "ffdhe8192" 00:23:32.196 ] 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "bdev_nvme_set_hotplug", 00:23:32.196 "params": { 00:23:32.196 "period_us": 100000, 00:23:32.196 "enable": false 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "bdev_malloc_create", 00:23:32.196 "params": { 00:23:32.196 "name": "malloc0", 00:23:32.196 "num_blocks": 8192, 00:23:32.196 "block_size": 4096, 00:23:32.196 "physical_block_size": 4096, 00:23:32.196 "uuid": "1d677858-b444-40d1-8ad0-c0c87163fa09", 00:23:32.196 "optimal_io_boundary": 0, 00:23:32.196 "md_size": 0, 00:23:32.196 "dif_type": 0, 00:23:32.196 "dif_is_head_of_md": false, 00:23:32.196 "dif_pi_format": 0 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "bdev_wait_for_examine" 00:23:32.196 } 00:23:32.196 ] 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "subsystem": "nbd", 00:23:32.196 "config": [] 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "subsystem": "scheduler", 00:23:32.196 "config": [ 00:23:32.196 { 00:23:32.196 "method": "framework_set_scheduler", 00:23:32.196 "params": { 00:23:32.196 "name": "static" 00:23:32.196 } 00:23:32.196 } 00:23:32.196 ] 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "subsystem": "nvmf", 00:23:32.196 "config": [ 00:23:32.196 { 00:23:32.196 "method": "nvmf_set_config", 00:23:32.196 "params": { 00:23:32.196 "discovery_filter": "match_any", 00:23:32.196 "admin_cmd_passthru": { 00:23:32.196 "identify_ctrlr": false 00:23:32.196 } 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_set_max_subsystems", 00:23:32.196 "params": { 00:23:32.196 "max_subsystems": 1024 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_set_crdt", 00:23:32.196 "params": { 00:23:32.196 "crdt1": 0, 00:23:32.196 "crdt2": 0, 00:23:32.196 "crdt3": 0 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_create_transport", 00:23:32.196 "params": { 00:23:32.196 "trtype": "TCP", 00:23:32.196 "max_queue_depth": 128, 00:23:32.196 "max_io_qpairs_per_ctrlr": 127, 00:23:32.196 "in_capsule_data_size": 4096, 00:23:32.196 "max_io_size": 131072, 00:23:32.196 "io_unit_size": 131072, 00:23:32.196 "max_aq_depth": 128, 00:23:32.196 "num_shared_buffers": 511, 00:23:32.196 "buf_cache_size": 4294967295, 00:23:32.196 "dif_insert_or_strip": false, 00:23:32.196 "zcopy": false, 00:23:32.196 "c2h_success": false, 00:23:32.196 "sock_priority": 0, 00:23:32.196 "abort_timeout_sec": 1, 00:23:32.196 "ack_timeout": 0, 00:23:32.196 
"data_wr_pool_size": 0 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_create_subsystem", 00:23:32.196 "params": { 00:23:32.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.196 "allow_any_host": false, 00:23:32.196 "serial_number": "SPDK00000000000001", 00:23:32.196 "model_number": "SPDK bdev Controller", 00:23:32.196 "max_namespaces": 10, 00:23:32.196 "min_cntlid": 1, 00:23:32.196 "max_cntlid": 65519, 00:23:32.196 "ana_reporting": false 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_subsystem_add_host", 00:23:32.196 "params": { 00:23:32.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.196 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.196 "psk": "/tmp/tmp.HUBCeDzLAC" 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_subsystem_add_ns", 00:23:32.196 "params": { 00:23:32.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.196 "namespace": { 00:23:32.196 "nsid": 1, 00:23:32.196 "bdev_name": "malloc0", 00:23:32.196 "nguid": "1D677858B44440D18AD0C0C87163FA09", 00:23:32.196 "uuid": "1d677858-b444-40d1-8ad0-c0c87163fa09", 00:23:32.196 "no_auto_visible": false 00:23:32.196 } 00:23:32.196 } 00:23:32.196 }, 00:23:32.196 { 00:23:32.196 "method": "nvmf_subsystem_add_listener", 00:23:32.196 "params": { 00:23:32.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.196 "listen_address": { 00:23:32.196 "trtype": "TCP", 00:23:32.196 "adrfam": "IPv4", 00:23:32.196 "traddr": "10.0.0.2", 00:23:32.196 "trsvcid": "4420" 00:23:32.196 }, 00:23:32.196 "secure_channel": true 00:23:32.196 } 00:23:32.196 } 00:23:32.196 ] 00:23:32.196 } 00:23:32.196 ] 00:23:32.196 }' 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=332981 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 332981 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 332981 ']' 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.196 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.196 [2024-07-25 13:51:28.940356] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:23:32.196 [2024-07-25 13:51:28.940409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.196 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.196 [2024-07-25 13:51:28.981414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:32.196 [2024-07-25 13:51:29.015759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.196 [2024-07-25 13:51:29.053519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.196 [2024-07-25 13:51:29.053559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.196 [2024-07-25 13:51:29.053569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.196 [2024-07-25 13:51:29.053577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.196 [2024-07-25 13:51:29.053584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.196 [2024-07-25 13:51:29.053641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.456 [2024-07-25 13:51:29.250764] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.456 [2024-07-25 13:51:29.275572] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.456 [2024-07-25 13:51:29.291594] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.456 [2024-07-25 13:51:29.291807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=333073 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 333073 /var/tmp/bdevperf.sock 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 333073 ']' 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.025 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:33.025 "subsystems": [ 00:23:33.025 { 00:23:33.025 "subsystem": "keyring", 00:23:33.025 "config": [] 00:23:33.025 }, 00:23:33.025 { 00:23:33.025 "subsystem": "iobuf", 00:23:33.025 "config": [ 00:23:33.025 { 00:23:33.025 "method": "iobuf_set_options", 00:23:33.025 "params": { 00:23:33.025 "small_pool_count": 8192, 00:23:33.025 "large_pool_count": 1024, 00:23:33.025 "small_bufsize": 8192, 00:23:33.025 "large_bufsize": 135168 00:23:33.025 } 00:23:33.025 } 00:23:33.025 ] 00:23:33.025 }, 00:23:33.025 { 00:23:33.025 "subsystem": "sock", 00:23:33.025 "config": [ 00:23:33.025 { 00:23:33.025 "method": "sock_set_default_impl", 00:23:33.025 "params": { 00:23:33.025 "impl_name": "posix" 00:23:33.025 } 00:23:33.025 }, 00:23:33.025 { 00:23:33.025 "method": "sock_impl_set_options", 00:23:33.025 "params": { 00:23:33.025 "impl_name": "ssl", 00:23:33.025 "recv_buf_size": 4096, 00:23:33.025 "send_buf_size": 4096, 00:23:33.025 "enable_recv_pipe": true, 00:23:33.025 "enable_quickack": false, 00:23:33.026 "enable_placement_id": 0, 00:23:33.026 "enable_zerocopy_send_server": true, 00:23:33.026 "enable_zerocopy_send_client": false, 00:23:33.026 "zerocopy_threshold": 0, 00:23:33.026 "tls_version": 0, 00:23:33.026 "enable_ktls": false 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "sock_impl_set_options", 00:23:33.026 "params": { 00:23:33.026 "impl_name": "posix", 00:23:33.026 "recv_buf_size": 2097152, 00:23:33.026 "send_buf_size": 2097152, 00:23:33.026 "enable_recv_pipe": true, 00:23:33.026 "enable_quickack": false, 00:23:33.026 "enable_placement_id": 0, 00:23:33.026 "enable_zerocopy_send_server": true, 00:23:33.026 "enable_zerocopy_send_client": false, 00:23:33.026 "zerocopy_threshold": 0, 00:23:33.026 "tls_version": 0, 00:23:33.026 "enable_ktls": false 00:23:33.026 } 00:23:33.026 } 00:23:33.026 ] 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "subsystem": "vmd", 00:23:33.026 "config": [] 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "subsystem": "accel", 00:23:33.026 "config": [ 00:23:33.026 { 00:23:33.026 "method": "accel_set_options", 00:23:33.026 "params": { 00:23:33.026 "small_cache_size": 128, 00:23:33.026 "large_cache_size": 16, 00:23:33.026 "task_count": 2048, 00:23:33.026 "sequence_count": 2048, 00:23:33.026 "buf_count": 2048 00:23:33.026 } 00:23:33.026 } 00:23:33.026 ] 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "subsystem": "bdev", 00:23:33.026 "config": [ 00:23:33.026 { 00:23:33.026 "method": "bdev_set_options", 00:23:33.026 "params": { 00:23:33.026 "bdev_io_pool_size": 65535, 00:23:33.026 "bdev_io_cache_size": 256, 00:23:33.026 "bdev_auto_examine": true, 00:23:33.026 "iobuf_small_cache_size": 128, 00:23:33.026 "iobuf_large_cache_size": 16 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "bdev_raid_set_options", 00:23:33.026 "params": { 00:23:33.026 "process_window_size_kb": 1024, 00:23:33.026 "process_max_bandwidth_mb_sec": 0 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "bdev_iscsi_set_options", 
00:23:33.026 "params": { 00:23:33.026 "timeout_sec": 30 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "bdev_nvme_set_options", 00:23:33.026 "params": { 00:23:33.026 "action_on_timeout": "none", 00:23:33.026 "timeout_us": 0, 00:23:33.026 "timeout_admin_us": 0, 00:23:33.026 "keep_alive_timeout_ms": 10000, 00:23:33.026 "arbitration_burst": 0, 00:23:33.026 "low_priority_weight": 0, 00:23:33.026 "medium_priority_weight": 0, 00:23:33.026 "high_priority_weight": 0, 00:23:33.026 "nvme_adminq_poll_period_us": 10000, 00:23:33.026 "nvme_ioq_poll_period_us": 0, 00:23:33.026 "io_queue_requests": 512, 00:23:33.026 "delay_cmd_submit": true, 00:23:33.026 "transport_retry_count": 4, 00:23:33.026 "bdev_retry_count": 3, 00:23:33.026 "transport_ack_timeout": 0, 00:23:33.026 "ctrlr_loss_timeout_sec": 0, 00:23:33.026 "reconnect_delay_sec": 0, 00:23:33.026 "fast_io_fail_timeout_sec": 0, 00:23:33.026 "disable_auto_failback": false, 00:23:33.026 "generate_uuids": false, 00:23:33.026 "transport_tos": 0, 00:23:33.026 "nvme_error_stat": false, 00:23:33.026 "rdma_srq_size": 0, 00:23:33.026 "io_path_stat": false, 00:23:33.026 "allow_accel_sequence": false, 00:23:33.026 "rdma_max_cq_size": 0, 00:23:33.026 "rdma_cm_event_timeout_ms": 0, 00:23:33.026 "dhchap_digests": [ 00:23:33.026 "sha256", 00:23:33.026 "sha384", 00:23:33.026 "sha512" 00:23:33.026 ], 00:23:33.026 "dhchap_dhgroups": [ 00:23:33.026 "null", 00:23:33.026 "ffdhe2048", 00:23:33.026 "ffdhe3072", 00:23:33.026 "ffdhe4096", 00:23:33.026 "ffdhe6144", 00:23:33.026 "ffdhe8192" 00:23:33.026 ] 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "bdev_nvme_attach_controller", 00:23:33.026 "params": { 00:23:33.026 "name": "TLSTEST", 00:23:33.026 "trtype": "TCP", 00:23:33.026 "adrfam": "IPv4", 00:23:33.026 "traddr": "10.0.0.2", 00:23:33.026 "trsvcid": "4420", 00:23:33.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.026 "prchk_reftag": false, 00:23:33.026 "prchk_guard": false, 00:23:33.026 "ctrlr_loss_timeout_sec": 0, 00:23:33.026 "reconnect_delay_sec": 0, 00:23:33.026 "fast_io_fail_timeout_sec": 0, 00:23:33.026 "psk": "/tmp/tmp.HUBCeDzLAC", 00:23:33.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.026 "hdgst": false, 00:23:33.026 "ddgst": false 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "bdev_nvme_set_hotplug", 00:23:33.026 "params": { 00:23:33.026 "period_us": 100000, 00:23:33.026 "enable": false 00:23:33.026 } 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "method": "bdev_wait_for_examine" 00:23:33.026 } 00:23:33.026 ] 00:23:33.026 }, 00:23:33.026 { 00:23:33.026 "subsystem": "nbd", 00:23:33.026 "config": [] 00:23:33.026 } 00:23:33.026 ] 00:23:33.026 }' 00:23:33.026 [2024-07-25 13:51:29.834987] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:33.026 [2024-07-25 13:51:29.835043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333073 ] 00:23:33.026 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.026 [2024-07-25 13:51:29.874344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:33.026 [2024-07-25 13:51:29.905927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.285 [2024-07-25 13:51:29.944325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.285 [2024-07-25 13:51:30.082443] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.285 [2024-07-25 13:51:30.082530] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:33.874 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.874 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:33.874 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:33.874 Running I/O for 10 seconds... 00:23:46.082 00:23:46.082 Latency(us) 00:23:46.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:46.082 Verification LBA range: start 0x0 length 0x2000 00:23:46.082 TLSTESTn1 : 10.03 4619.99 18.05 0.00 0.00 27654.20 6763.32 55784.24 00:23:46.082 =================================================================================================================== 00:23:46.082 Total : 4619.99 18.05 0.00 0.00 27654.20 6763.32 55784.24 00:23:46.082 0 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 333073 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 333073 ']' 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 333073 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 333073 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 333073' 00:23:46.082 killing process with pid 333073 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 333073 00:23:46.082 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.082 00:23:46.082 Latency(us) 00:23:46.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.082 =================================================================================================================== 00:23:46.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.082 [2024-07-25 13:51:40.837056] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:46.082 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@974 -- # wait 333073 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 332981 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 332981 ']' 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 332981 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332981 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332981' 00:23:46.082 killing process with pid 332981 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 332981 00:23:46.082 [2024-07-25 13:51:41.061624] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 332981 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=335061 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 335061 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 335061 ']' 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.082 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.082 [2024-07-25 13:51:41.300057] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:23:46.082 [2024-07-25 13:51:41.300116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.082 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.082 [2024-07-25 13:51:41.341856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:46.082 [2024-07-25 13:51:41.376172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.082 [2024-07-25 13:51:41.411991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.082 [2024-07-25 13:51:41.412035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.082 [2024-07-25 13:51:41.412044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.082 [2024-07-25 13:51:41.412053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.082 [2024-07-25 13:51:41.412060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.082 [2024-07-25 13:51:41.412083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.HUBCeDzLAC 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HUBCeDzLAC 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.082 [2024-07-25 13:51:42.289616] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:46.082 [2024-07-25 13:51:42.630495] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.082 [2024-07-25 13:51:42.630691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:46.082 malloc0 
00:23:46.082 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:46.342 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HUBCeDzLAC 00:23:46.342 [2024-07-25 13:51:43.140083] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=335406 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 335406 /var/tmp/bdevperf.sock 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 335406 ']' 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.342 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.342 [2024-07-25 13:51:43.194176] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:46.342 [2024-07-25 13:51:43.194231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335406 ] 00:23:46.342 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.605 [2024-07-25 13:51:43.231854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:46.605 [2024-07-25 13:51:43.266055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.605 [2024-07-25 13:51:43.304356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.605 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.605 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:46.605 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HUBCeDzLAC 00:23:46.866 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:46.866 [2024-07-25 13:51:43.688027] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.125 nvme0n1 00:23:47.125 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:47.125 Running I/O for 1 seconds... 00:23:48.096 00:23:48.096 Latency(us) 00:23:48.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.096 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:48.096 Verification LBA range: start 0x0 length 0x2000 00:23:48.096 nvme0n1 : 1.03 4295.99 16.78 0.00 0.00 29429.36 4666.16 69625.45 00:23:48.096 =================================================================================================================== 00:23:48.096 Total : 4295.99 16.78 0.00 0.00 29429.36 4666.16 69625.45 00:23:48.096 0 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 335406 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 335406 ']' 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 335406 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335406 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335406' 00:23:48.096 killing process with pid 335406 00:23:48.096 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 335406 00:23:48.096 Received shutdown signal, test time was about 1.000000 seconds 00:23:48.096 00:23:48.096 Latency(us) 00:23:48.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.096 =================================================================================================================== 00:23:48.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.096 13:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 335406 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 335061 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 335061 ']' 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 335061 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335061 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335061' 00:23:48.355 killing process with pid 335061 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 335061 00:23:48.355 [2024-07-25 13:51:45.190288] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:48.355 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 335061 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=335692 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 335692 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 335692 ']' 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.614 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.614 [2024-07-25 13:51:45.426535] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
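The waitforlisten / killprocess pair that brackets each app in this log comes from test/common/autotest_common.sh; the trace only shows its side effects (local max_retries=100, the "Waiting for process..." echo, the kill -0 probes). A plausible reimplementation of the waiting half, purely for illustration and not the helper's actual internals:

    # Illustrative only: poll the app's RPC socket until it answers, bailing
    # out if the process dies first. Mirrors the max_retries=100 in the trace.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # app exited early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0                     # socket is answering
            sleep 0.1
        done
        return 1                                            # timed out
    }

rpc_get_methods is a cheap no-argument RPC, which is what makes it a convenient liveness probe here; the function name and retry interval are assumptions for the sketch.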
00:23:48.614 [2024-07-25 13:51:45.426589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.614 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.614 [2024-07-25 13:51:45.469242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:48.614 [2024-07-25 13:51:45.500282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.874 [2024-07-25 13:51:45.537541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.874 [2024-07-25 13:51:45.537584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.874 [2024-07-25 13:51:45.537593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.874 [2024-07-25 13:51:45.537602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.874 [2024-07-25 13:51:45.537609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.874 [2024-07-25 13:51:45.537634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.874 [2024-07-25 13:51:45.677577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.874 malloc0 00:23:48.874 [2024-07-25 13:51:45.706021] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.874 [2024-07-25 13:51:45.719059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=335804 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 335804 /var/tmp/bdevperf.sock 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 335804 ']' 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.874 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.133 [2024-07-25 13:51:45.791008] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:49.133 [2024-07-25 13:51:45.791055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335804 ] 00:23:49.133 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.133 [2024-07-25 13:51:45.828722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:49.133 [2024-07-25 13:51:45.862742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.133 [2024-07-25 13:51:45.902015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.133 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.133 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:49.133 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HUBCeDzLAC 00:23:49.392 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:49.651 [2024-07-25 13:51:46.307037] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.651 nvme0n1 00:23:49.651 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.651 Running I/O for 1 seconds... 
00:23:51.029 00:23:51.029 Latency(us) 00:23:51.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.029 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:51.029 Verification LBA range: start 0x0 length 0x2000 00:23:51.029 nvme0n1 : 1.03 4388.84 17.14 0.00 0.00 28795.23 6815.74 57881.40 00:23:51.029 =================================================================================================================== 00:23:51.029 Total : 4388.84 17.14 0.00 0.00 28795.23 6815.74 57881.40 00:23:51.029 0 00:23:51.029 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:51.029 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.029 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.029 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.029 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:51.029 "subsystems": [ 00:23:51.029 { 00:23:51.029 "subsystem": "keyring", 00:23:51.029 "config": [ 00:23:51.029 { 00:23:51.029 "method": "keyring_file_add_key", 00:23:51.029 "params": { 00:23:51.029 "name": "key0", 00:23:51.029 "path": "/tmp/tmp.HUBCeDzLAC" 00:23:51.029 } 00:23:51.029 } 00:23:51.029 ] 00:23:51.029 }, 00:23:51.029 { 00:23:51.029 "subsystem": "iobuf", 00:23:51.029 "config": [ 00:23:51.029 { 00:23:51.029 "method": "iobuf_set_options", 00:23:51.029 "params": { 00:23:51.029 "small_pool_count": 8192, 00:23:51.029 "large_pool_count": 1024, 00:23:51.029 "small_bufsize": 8192, 00:23:51.029 "large_bufsize": 135168 00:23:51.029 } 00:23:51.029 } 00:23:51.029 ] 00:23:51.029 }, 00:23:51.029 { 00:23:51.029 "subsystem": "sock", 00:23:51.029 "config": [ 00:23:51.029 { 00:23:51.029 "method": "sock_set_default_impl", 00:23:51.029 "params": { 00:23:51.029 "impl_name": "posix" 00:23:51.029 } 00:23:51.029 }, 00:23:51.029 { 00:23:51.029 "method": "sock_impl_set_options", 00:23:51.029 "params": { 00:23:51.029 "impl_name": "ssl", 00:23:51.029 "recv_buf_size": 4096, 00:23:51.029 "send_buf_size": 4096, 00:23:51.029 "enable_recv_pipe": true, 00:23:51.029 "enable_quickack": false, 00:23:51.029 "enable_placement_id": 0, 00:23:51.030 "enable_zerocopy_send_server": true, 00:23:51.030 "enable_zerocopy_send_client": false, 00:23:51.030 "zerocopy_threshold": 0, 00:23:51.030 "tls_version": 0, 00:23:51.030 "enable_ktls": false 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "sock_impl_set_options", 00:23:51.030 "params": { 00:23:51.030 "impl_name": "posix", 00:23:51.030 "recv_buf_size": 2097152, 00:23:51.030 "send_buf_size": 2097152, 00:23:51.030 "enable_recv_pipe": true, 00:23:51.030 "enable_quickack": false, 00:23:51.030 "enable_placement_id": 0, 00:23:51.030 "enable_zerocopy_send_server": true, 00:23:51.030 "enable_zerocopy_send_client": false, 00:23:51.030 "zerocopy_threshold": 0, 00:23:51.030 "tls_version": 0, 00:23:51.030 "enable_ktls": false 00:23:51.030 } 00:23:51.030 } 00:23:51.030 ] 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "subsystem": "vmd", 00:23:51.030 "config": [] 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "subsystem": "accel", 00:23:51.030 "config": [ 00:23:51.030 { 00:23:51.030 "method": "accel_set_options", 00:23:51.030 "params": { 00:23:51.030 "small_cache_size": 128, 00:23:51.030 "large_cache_size": 16, 00:23:51.030 "task_count": 2048, 00:23:51.030 "sequence_count": 2048, 00:23:51.030 "buf_count": 
2048 00:23:51.030 } 00:23:51.030 } 00:23:51.030 ] 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "subsystem": "bdev", 00:23:51.030 "config": [ 00:23:51.030 { 00:23:51.030 "method": "bdev_set_options", 00:23:51.030 "params": { 00:23:51.030 "bdev_io_pool_size": 65535, 00:23:51.030 "bdev_io_cache_size": 256, 00:23:51.030 "bdev_auto_examine": true, 00:23:51.030 "iobuf_small_cache_size": 128, 00:23:51.030 "iobuf_large_cache_size": 16 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "bdev_raid_set_options", 00:23:51.030 "params": { 00:23:51.030 "process_window_size_kb": 1024, 00:23:51.030 "process_max_bandwidth_mb_sec": 0 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "bdev_iscsi_set_options", 00:23:51.030 "params": { 00:23:51.030 "timeout_sec": 30 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "bdev_nvme_set_options", 00:23:51.030 "params": { 00:23:51.030 "action_on_timeout": "none", 00:23:51.030 "timeout_us": 0, 00:23:51.030 "timeout_admin_us": 0, 00:23:51.030 "keep_alive_timeout_ms": 10000, 00:23:51.030 "arbitration_burst": 0, 00:23:51.030 "low_priority_weight": 0, 00:23:51.030 "medium_priority_weight": 0, 00:23:51.030 "high_priority_weight": 0, 00:23:51.030 "nvme_adminq_poll_period_us": 10000, 00:23:51.030 "nvme_ioq_poll_period_us": 0, 00:23:51.030 "io_queue_requests": 0, 00:23:51.030 "delay_cmd_submit": true, 00:23:51.030 "transport_retry_count": 4, 00:23:51.030 "bdev_retry_count": 3, 00:23:51.030 "transport_ack_timeout": 0, 00:23:51.030 "ctrlr_loss_timeout_sec": 0, 00:23:51.030 "reconnect_delay_sec": 0, 00:23:51.030 "fast_io_fail_timeout_sec": 0, 00:23:51.030 "disable_auto_failback": false, 00:23:51.030 "generate_uuids": false, 00:23:51.030 "transport_tos": 0, 00:23:51.030 "nvme_error_stat": false, 00:23:51.030 "rdma_srq_size": 0, 00:23:51.030 "io_path_stat": false, 00:23:51.030 "allow_accel_sequence": false, 00:23:51.030 "rdma_max_cq_size": 0, 00:23:51.030 "rdma_cm_event_timeout_ms": 0, 00:23:51.030 "dhchap_digests": [ 00:23:51.030 "sha256", 00:23:51.030 "sha384", 00:23:51.030 "sha512" 00:23:51.030 ], 00:23:51.030 "dhchap_dhgroups": [ 00:23:51.030 "null", 00:23:51.030 "ffdhe2048", 00:23:51.030 "ffdhe3072", 00:23:51.030 "ffdhe4096", 00:23:51.030 "ffdhe6144", 00:23:51.030 "ffdhe8192" 00:23:51.030 ] 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "bdev_nvme_set_hotplug", 00:23:51.030 "params": { 00:23:51.030 "period_us": 100000, 00:23:51.030 "enable": false 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "bdev_malloc_create", 00:23:51.030 "params": { 00:23:51.030 "name": "malloc0", 00:23:51.030 "num_blocks": 8192, 00:23:51.030 "block_size": 4096, 00:23:51.030 "physical_block_size": 4096, 00:23:51.030 "uuid": "65600c7b-ef84-4d9c-b287-e82da87bf937", 00:23:51.030 "optimal_io_boundary": 0, 00:23:51.030 "md_size": 0, 00:23:51.030 "dif_type": 0, 00:23:51.030 "dif_is_head_of_md": false, 00:23:51.030 "dif_pi_format": 0 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "bdev_wait_for_examine" 00:23:51.030 } 00:23:51.030 ] 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "subsystem": "nbd", 00:23:51.030 "config": [] 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "subsystem": "scheduler", 00:23:51.030 "config": [ 00:23:51.030 { 00:23:51.030 "method": "framework_set_scheduler", 00:23:51.030 "params": { 00:23:51.030 "name": "static" 00:23:51.030 } 00:23:51.030 } 00:23:51.030 ] 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "subsystem": "nvmf", 00:23:51.030 "config": [ 00:23:51.030 { 00:23:51.030 
"method": "nvmf_set_config", 00:23:51.030 "params": { 00:23:51.030 "discovery_filter": "match_any", 00:23:51.030 "admin_cmd_passthru": { 00:23:51.030 "identify_ctrlr": false 00:23:51.030 } 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_set_max_subsystems", 00:23:51.030 "params": { 00:23:51.030 "max_subsystems": 1024 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_set_crdt", 00:23:51.030 "params": { 00:23:51.030 "crdt1": 0, 00:23:51.030 "crdt2": 0, 00:23:51.030 "crdt3": 0 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_create_transport", 00:23:51.030 "params": { 00:23:51.030 "trtype": "TCP", 00:23:51.030 "max_queue_depth": 128, 00:23:51.030 "max_io_qpairs_per_ctrlr": 127, 00:23:51.030 "in_capsule_data_size": 4096, 00:23:51.030 "max_io_size": 131072, 00:23:51.030 "io_unit_size": 131072, 00:23:51.030 "max_aq_depth": 128, 00:23:51.030 "num_shared_buffers": 511, 00:23:51.030 "buf_cache_size": 4294967295, 00:23:51.030 "dif_insert_or_strip": false, 00:23:51.030 "zcopy": false, 00:23:51.030 "c2h_success": false, 00:23:51.030 "sock_priority": 0, 00:23:51.030 "abort_timeout_sec": 1, 00:23:51.030 "ack_timeout": 0, 00:23:51.030 "data_wr_pool_size": 0 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_create_subsystem", 00:23:51.030 "params": { 00:23:51.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.030 "allow_any_host": false, 00:23:51.030 "serial_number": "00000000000000000000", 00:23:51.030 "model_number": "SPDK bdev Controller", 00:23:51.030 "max_namespaces": 32, 00:23:51.030 "min_cntlid": 1, 00:23:51.030 "max_cntlid": 65519, 00:23:51.030 "ana_reporting": false 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_subsystem_add_host", 00:23:51.030 "params": { 00:23:51.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.030 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.030 "psk": "key0" 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_subsystem_add_ns", 00:23:51.030 "params": { 00:23:51.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.030 "namespace": { 00:23:51.030 "nsid": 1, 00:23:51.030 "bdev_name": "malloc0", 00:23:51.030 "nguid": "65600C7BEF844D9CB287E82DA87BF937", 00:23:51.030 "uuid": "65600c7b-ef84-4d9c-b287-e82da87bf937", 00:23:51.030 "no_auto_visible": false 00:23:51.030 } 00:23:51.030 } 00:23:51.030 }, 00:23:51.030 { 00:23:51.030 "method": "nvmf_subsystem_add_listener", 00:23:51.030 "params": { 00:23:51.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.030 "listen_address": { 00:23:51.030 "trtype": "TCP", 00:23:51.030 "adrfam": "IPv4", 00:23:51.030 "traddr": "10.0.0.2", 00:23:51.030 "trsvcid": "4420" 00:23:51.030 }, 00:23:51.030 "secure_channel": false, 00:23:51.030 "sock_impl": "ssl" 00:23:51.030 } 00:23:51.030 } 00:23:51.030 ] 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }' 00:23:51.031 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:51.031 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:51.031 "subsystems": [ 00:23:51.031 { 00:23:51.031 "subsystem": "keyring", 00:23:51.031 "config": [ 00:23:51.031 { 00:23:51.031 "method": "keyring_file_add_key", 00:23:51.031 "params": { 00:23:51.031 "name": "key0", 00:23:51.031 "path": "/tmp/tmp.HUBCeDzLAC" 00:23:51.031 } 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "subsystem": "iobuf", 00:23:51.031 
"config": [ 00:23:51.031 { 00:23:51.031 "method": "iobuf_set_options", 00:23:51.031 "params": { 00:23:51.031 "small_pool_count": 8192, 00:23:51.031 "large_pool_count": 1024, 00:23:51.031 "small_bufsize": 8192, 00:23:51.031 "large_bufsize": 135168 00:23:51.031 } 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "subsystem": "sock", 00:23:51.031 "config": [ 00:23:51.031 { 00:23:51.031 "method": "sock_set_default_impl", 00:23:51.031 "params": { 00:23:51.031 "impl_name": "posix" 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "sock_impl_set_options", 00:23:51.031 "params": { 00:23:51.031 "impl_name": "ssl", 00:23:51.031 "recv_buf_size": 4096, 00:23:51.031 "send_buf_size": 4096, 00:23:51.031 "enable_recv_pipe": true, 00:23:51.031 "enable_quickack": false, 00:23:51.031 "enable_placement_id": 0, 00:23:51.031 "enable_zerocopy_send_server": true, 00:23:51.031 "enable_zerocopy_send_client": false, 00:23:51.031 "zerocopy_threshold": 0, 00:23:51.031 "tls_version": 0, 00:23:51.031 "enable_ktls": false 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "sock_impl_set_options", 00:23:51.031 "params": { 00:23:51.031 "impl_name": "posix", 00:23:51.031 "recv_buf_size": 2097152, 00:23:51.031 "send_buf_size": 2097152, 00:23:51.031 "enable_recv_pipe": true, 00:23:51.031 "enable_quickack": false, 00:23:51.031 "enable_placement_id": 0, 00:23:51.031 "enable_zerocopy_send_server": true, 00:23:51.031 "enable_zerocopy_send_client": false, 00:23:51.031 "zerocopy_threshold": 0, 00:23:51.031 "tls_version": 0, 00:23:51.031 "enable_ktls": false 00:23:51.031 } 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "subsystem": "vmd", 00:23:51.031 "config": [] 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "subsystem": "accel", 00:23:51.031 "config": [ 00:23:51.031 { 00:23:51.031 "method": "accel_set_options", 00:23:51.031 "params": { 00:23:51.031 "small_cache_size": 128, 00:23:51.031 "large_cache_size": 16, 00:23:51.031 "task_count": 2048, 00:23:51.031 "sequence_count": 2048, 00:23:51.031 "buf_count": 2048 00:23:51.031 } 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "subsystem": "bdev", 00:23:51.031 "config": [ 00:23:51.031 { 00:23:51.031 "method": "bdev_set_options", 00:23:51.031 "params": { 00:23:51.031 "bdev_io_pool_size": 65535, 00:23:51.031 "bdev_io_cache_size": 256, 00:23:51.031 "bdev_auto_examine": true, 00:23:51.031 "iobuf_small_cache_size": 128, 00:23:51.031 "iobuf_large_cache_size": 16 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_raid_set_options", 00:23:51.031 "params": { 00:23:51.031 "process_window_size_kb": 1024, 00:23:51.031 "process_max_bandwidth_mb_sec": 0 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_iscsi_set_options", 00:23:51.031 "params": { 00:23:51.031 "timeout_sec": 30 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_nvme_set_options", 00:23:51.031 "params": { 00:23:51.031 "action_on_timeout": "none", 00:23:51.031 "timeout_us": 0, 00:23:51.031 "timeout_admin_us": 0, 00:23:51.031 "keep_alive_timeout_ms": 10000, 00:23:51.031 "arbitration_burst": 0, 00:23:51.031 "low_priority_weight": 0, 00:23:51.031 "medium_priority_weight": 0, 00:23:51.031 "high_priority_weight": 0, 00:23:51.031 "nvme_adminq_poll_period_us": 10000, 00:23:51.031 "nvme_ioq_poll_period_us": 0, 00:23:51.031 "io_queue_requests": 512, 00:23:51.031 "delay_cmd_submit": true, 00:23:51.031 "transport_retry_count": 4, 00:23:51.031 "bdev_retry_count": 3, 
00:23:51.031 "transport_ack_timeout": 0, 00:23:51.031 "ctrlr_loss_timeout_sec": 0, 00:23:51.031 "reconnect_delay_sec": 0, 00:23:51.031 "fast_io_fail_timeout_sec": 0, 00:23:51.031 "disable_auto_failback": false, 00:23:51.031 "generate_uuids": false, 00:23:51.031 "transport_tos": 0, 00:23:51.031 "nvme_error_stat": false, 00:23:51.031 "rdma_srq_size": 0, 00:23:51.031 "io_path_stat": false, 00:23:51.031 "allow_accel_sequence": false, 00:23:51.031 "rdma_max_cq_size": 0, 00:23:51.031 "rdma_cm_event_timeout_ms": 0, 00:23:51.031 "dhchap_digests": [ 00:23:51.031 "sha256", 00:23:51.031 "sha384", 00:23:51.031 "sha512" 00:23:51.031 ], 00:23:51.031 "dhchap_dhgroups": [ 00:23:51.031 "null", 00:23:51.031 "ffdhe2048", 00:23:51.031 "ffdhe3072", 00:23:51.031 "ffdhe4096", 00:23:51.031 "ffdhe6144", 00:23:51.031 "ffdhe8192" 00:23:51.031 ] 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_nvme_attach_controller", 00:23:51.031 "params": { 00:23:51.031 "name": "nvme0", 00:23:51.031 "trtype": "TCP", 00:23:51.031 "adrfam": "IPv4", 00:23:51.031 "traddr": "10.0.0.2", 00:23:51.031 "trsvcid": "4420", 00:23:51.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.031 "prchk_reftag": false, 00:23:51.031 "prchk_guard": false, 00:23:51.031 "ctrlr_loss_timeout_sec": 0, 00:23:51.031 "reconnect_delay_sec": 0, 00:23:51.031 "fast_io_fail_timeout_sec": 0, 00:23:51.031 "psk": "key0", 00:23:51.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.031 "hdgst": false, 00:23:51.031 "ddgst": false 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_nvme_set_hotplug", 00:23:51.031 "params": { 00:23:51.031 "period_us": 100000, 00:23:51.031 "enable": false 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_enable_histogram", 00:23:51.031 "params": { 00:23:51.031 "name": "nvme0n1", 00:23:51.031 "enable": true 00:23:51.031 } 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "method": "bdev_wait_for_examine" 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }, 00:23:51.031 { 00:23:51.031 "subsystem": "nbd", 00:23:51.031 "config": [] 00:23:51.031 } 00:23:51.031 ] 00:23:51.031 }' 00:23:51.031 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 335804 00:23:51.031 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 335804 ']' 00:23:51.031 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 335804 00:23:51.032 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335804 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335804' 00:23:51.291 killing process with pid 335804 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 335804 00:23:51.291 Received shutdown signal, test time was about 1.000000 seconds 00:23:51.291 00:23:51.291 Latency(us) 00:23:51.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.291 
=================================================================================================================== 00:23:51.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.291 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 335804 00:23:51.291 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 335692 00:23:51.291 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 335692 ']' 00:23:51.291 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 335692 00:23:51.291 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.291 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.291 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335692 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335692' 00:23:51.550 killing process with pid 335692 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 335692 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 335692 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.550 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:51.550 "subsystems": [ 00:23:51.550 { 00:23:51.550 "subsystem": "keyring", 00:23:51.550 "config": [ 00:23:51.550 { 00:23:51.550 "method": "keyring_file_add_key", 00:23:51.550 "params": { 00:23:51.550 "name": "key0", 00:23:51.550 "path": "/tmp/tmp.HUBCeDzLAC" 00:23:51.550 } 00:23:51.550 } 00:23:51.550 ] 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "subsystem": "iobuf", 00:23:51.550 "config": [ 00:23:51.550 { 00:23:51.550 "method": "iobuf_set_options", 00:23:51.550 "params": { 00:23:51.550 "small_pool_count": 8192, 00:23:51.550 "large_pool_count": 1024, 00:23:51.550 "small_bufsize": 8192, 00:23:51.550 "large_bufsize": 135168 00:23:51.550 } 00:23:51.550 } 00:23:51.550 ] 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "subsystem": "sock", 00:23:51.550 "config": [ 00:23:51.550 { 00:23:51.550 "method": "sock_set_default_impl", 00:23:51.550 "params": { 00:23:51.550 "impl_name": "posix" 00:23:51.550 } 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "method": "sock_impl_set_options", 00:23:51.550 "params": { 00:23:51.550 "impl_name": "ssl", 00:23:51.550 "recv_buf_size": 4096, 00:23:51.550 "send_buf_size": 4096, 00:23:51.550 "enable_recv_pipe": true, 00:23:51.550 "enable_quickack": false, 00:23:51.550 "enable_placement_id": 0, 00:23:51.550 "enable_zerocopy_send_server": true, 00:23:51.550 "enable_zerocopy_send_client": false, 00:23:51.550 "zerocopy_threshold": 0, 00:23:51.550 "tls_version": 0, 00:23:51.550 "enable_ktls": false 00:23:51.550 } 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "method": "sock_impl_set_options", 
00:23:51.550 "params": { 00:23:51.550 "impl_name": "posix", 00:23:51.550 "recv_buf_size": 2097152, 00:23:51.550 "send_buf_size": 2097152, 00:23:51.550 "enable_recv_pipe": true, 00:23:51.550 "enable_quickack": false, 00:23:51.550 "enable_placement_id": 0, 00:23:51.550 "enable_zerocopy_send_server": true, 00:23:51.550 "enable_zerocopy_send_client": false, 00:23:51.550 "zerocopy_threshold": 0, 00:23:51.550 "tls_version": 0, 00:23:51.550 "enable_ktls": false 00:23:51.550 } 00:23:51.550 } 00:23:51.550 ] 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "subsystem": "vmd", 00:23:51.550 "config": [] 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "subsystem": "accel", 00:23:51.550 "config": [ 00:23:51.550 { 00:23:51.550 "method": "accel_set_options", 00:23:51.550 "params": { 00:23:51.550 "small_cache_size": 128, 00:23:51.550 "large_cache_size": 16, 00:23:51.550 "task_count": 2048, 00:23:51.550 "sequence_count": 2048, 00:23:51.550 "buf_count": 2048 00:23:51.550 } 00:23:51.550 } 00:23:51.550 ] 00:23:51.550 }, 00:23:51.550 { 00:23:51.550 "subsystem": "bdev", 00:23:51.550 "config": [ 00:23:51.550 { 00:23:51.550 "method": "bdev_set_options", 00:23:51.550 "params": { 00:23:51.550 "bdev_io_pool_size": 65535, 00:23:51.550 "bdev_io_cache_size": 256, 00:23:51.550 "bdev_auto_examine": true, 00:23:51.551 "iobuf_small_cache_size": 128, 00:23:51.551 "iobuf_large_cache_size": 16 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "bdev_raid_set_options", 00:23:51.551 "params": { 00:23:51.551 "process_window_size_kb": 1024, 00:23:51.551 "process_max_bandwidth_mb_sec": 0 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "bdev_iscsi_set_options", 00:23:51.551 "params": { 00:23:51.551 "timeout_sec": 30 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "bdev_nvme_set_options", 00:23:51.551 "params": { 00:23:51.551 "action_on_timeout": "none", 00:23:51.551 "timeout_us": 0, 00:23:51.551 "timeout_admin_us": 0, 00:23:51.551 "keep_alive_timeout_ms": 10000, 00:23:51.551 "arbitration_burst": 0, 00:23:51.551 "low_priority_weight": 0, 00:23:51.551 "medium_priority_weight": 0, 00:23:51.551 "high_priority_weight": 0, 00:23:51.551 "nvme_adminq_poll_period_us": 10000, 00:23:51.551 "nvme_ioq_poll_period_us": 0, 00:23:51.551 "io_queue_requests": 0, 00:23:51.551 "delay_cmd_submit": true, 00:23:51.551 "transport_retry_count": 4, 00:23:51.551 "bdev_retry_count": 3, 00:23:51.551 "transport_ack_timeout": 0, 00:23:51.551 "ctrlr_loss_timeout_sec": 0, 00:23:51.551 "reconnect_delay_sec": 0, 00:23:51.551 "fast_io_fail_timeout_sec": 0, 00:23:51.551 "disable_auto_failback": false, 00:23:51.551 "generate_uuids": false, 00:23:51.551 "transport_tos": 0, 00:23:51.551 "nvme_error_stat": false, 00:23:51.551 "rdma_srq_size": 0, 00:23:51.551 "io_path_stat": false, 00:23:51.551 "allow_accel_sequence": false, 00:23:51.551 "rdma_max_cq_size": 0, 00:23:51.551 "rdma_cm_event_timeout_ms": 0, 00:23:51.551 "dhchap_digests": [ 00:23:51.551 "sha256", 00:23:51.551 "sha384", 00:23:51.551 "sha512" 00:23:51.551 ], 00:23:51.551 "dhchap_dhgroups": [ 00:23:51.551 "null", 00:23:51.551 "ffdhe2048", 00:23:51.551 "ffdhe3072", 00:23:51.551 "ffdhe4096", 00:23:51.551 "ffdhe6144", 00:23:51.551 "ffdhe8192" 00:23:51.551 ] 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "bdev_nvme_set_hotplug", 00:23:51.551 "params": { 00:23:51.551 "period_us": 100000, 00:23:51.551 "enable": false 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "bdev_malloc_create", 00:23:51.551 "params": { 00:23:51.551 
"name": "malloc0", 00:23:51.551 "num_blocks": 8192, 00:23:51.551 "block_size": 4096, 00:23:51.551 "physical_block_size": 4096, 00:23:51.551 "uuid": "65600c7b-ef84-4d9c-b287-e82da87bf937", 00:23:51.551 "optimal_io_boundary": 0, 00:23:51.551 "md_size": 0, 00:23:51.551 "dif_type": 0, 00:23:51.551 "dif_is_head_of_md": false, 00:23:51.551 "dif_pi_format": 0 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "bdev_wait_for_examine" 00:23:51.551 } 00:23:51.551 ] 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "subsystem": "nbd", 00:23:51.551 "config": [] 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "subsystem": "scheduler", 00:23:51.551 "config": [ 00:23:51.551 { 00:23:51.551 "method": "framework_set_scheduler", 00:23:51.551 "params": { 00:23:51.551 "name": "static" 00:23:51.551 } 00:23:51.551 } 00:23:51.551 ] 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "subsystem": "nvmf", 00:23:51.551 "config": [ 00:23:51.551 { 00:23:51.551 "method": "nvmf_set_config", 00:23:51.551 "params": { 00:23:51.551 "discovery_filter": "match_any", 00:23:51.551 "admin_cmd_passthru": { 00:23:51.551 "identify_ctrlr": false 00:23:51.551 } 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_set_max_subsystems", 00:23:51.551 "params": { 00:23:51.551 "max_subsystems": 1024 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_set_crdt", 00:23:51.551 "params": { 00:23:51.551 "crdt1": 0, 00:23:51.551 "crdt2": 0, 00:23:51.551 "crdt3": 0 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_create_transport", 00:23:51.551 "params": { 00:23:51.551 "trtype": "TCP", 00:23:51.551 "max_queue_depth": 128, 00:23:51.551 "max_io_qpairs_per_ctrlr": 127, 00:23:51.551 "in_capsule_data_size": 4096, 00:23:51.551 "max_io_size": 131072, 00:23:51.551 "io_unit_size": 131072, 00:23:51.551 "max_aq_depth": 128, 00:23:51.551 "num_shared_buffers": 511, 00:23:51.551 "buf_cache_size": 4294967295, 00:23:51.551 "dif_insert_or_strip": false, 00:23:51.551 "zcopy": false, 00:23:51.551 "c2h_success": false, 00:23:51.551 "sock_priority": 0, 00:23:51.551 "abort_timeout_sec": 1, 00:23:51.551 "ack_timeout": 0, 00:23:51.551 "data_wr_pool_size": 0 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_create_subsystem", 00:23:51.551 "params": { 00:23:51.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.551 "allow_any_host": false, 00:23:51.551 "serial_number": "00000000000000000000", 00:23:51.551 "model_number": "SPDK bdev Controller", 00:23:51.551 "max_namespaces": 32, 00:23:51.551 "min_cntlid": 1, 00:23:51.551 "max_cntlid": 65519, 00:23:51.551 "ana_reporting": false 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_subsystem_add_host", 00:23:51.551 "params": { 00:23:51.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.551 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.551 "psk": "key0" 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_subsystem_add_ns", 00:23:51.551 "params": { 00:23:51.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.551 "namespace": { 00:23:51.551 "nsid": 1, 00:23:51.551 "bdev_name": "malloc0", 00:23:51.551 "nguid": "65600C7BEF844D9CB287E82DA87BF937", 00:23:51.551 "uuid": "65600c7b-ef84-4d9c-b287-e82da87bf937", 00:23:51.551 "no_auto_visible": false 00:23:51.551 } 00:23:51.551 } 00:23:51.551 }, 00:23:51.551 { 00:23:51.551 "method": "nvmf_subsystem_add_listener", 00:23:51.551 "params": { 00:23:51.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.551 "listen_address": { 00:23:51.551 "trtype": "TCP", 
00:23:51.551 "adrfam": "IPv4", 00:23:51.551 "traddr": "10.0.0.2", 00:23:51.551 "trsvcid": "4420" 00:23:51.551 }, 00:23:51.551 "secure_channel": false, 00:23:51.551 "sock_impl": "ssl" 00:23:51.551 } 00:23:51.551 } 00:23:51.551 ] 00:23:51.551 } 00:23:51.551 ] 00:23:51.551 }' 00:23:51.551 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.551 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=336249 00:23:51.551 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:51.551 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 336249 00:23:51.551 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 336249 ']' 00:23:51.552 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.552 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.552 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.552 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.552 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.552 [2024-07-25 13:51:48.432709] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:51.552 [2024-07-25 13:51:48.432788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.811 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.811 [2024-07-25 13:51:48.474495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:51.811 [2024-07-25 13:51:48.509221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.811 [2024-07-25 13:51:48.544856] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.811 [2024-07-25 13:51:48.544900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.811 [2024-07-25 13:51:48.544909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.811 [2024-07-25 13:51:48.544917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.811 [2024-07-25 13:51:48.544924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.811 [2024-07-25 13:51:48.544981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.070 [2024-07-25 13:51:48.750316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.070 [2024-07-25 13:51:48.796328] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.070 [2024-07-25 13:51:48.796516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.639 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.639 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.639 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.639 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.639 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=336526 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 336526 /var/tmp/bdevperf.sock 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 336526 ']' 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.640 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:52.640 "subsystems": [ 00:23:52.640 { 00:23:52.640 "subsystem": "keyring", 00:23:52.640 "config": [ 00:23:52.640 { 00:23:52.640 "method": "keyring_file_add_key", 00:23:52.640 "params": { 00:23:52.640 "name": "key0", 00:23:52.640 "path": "/tmp/tmp.HUBCeDzLAC" 00:23:52.640 } 00:23:52.640 } 00:23:52.640 ] 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "subsystem": "iobuf", 00:23:52.640 "config": [ 00:23:52.640 { 00:23:52.640 "method": "iobuf_set_options", 00:23:52.640 "params": { 00:23:52.640 "small_pool_count": 8192, 00:23:52.640 "large_pool_count": 1024, 00:23:52.640 "small_bufsize": 8192, 00:23:52.640 "large_bufsize": 135168 00:23:52.640 } 00:23:52.640 } 00:23:52.640 ] 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "subsystem": "sock", 00:23:52.640 "config": [ 00:23:52.640 { 00:23:52.640 "method": "sock_set_default_impl", 00:23:52.640 "params": { 00:23:52.640 "impl_name": "posix" 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "sock_impl_set_options", 00:23:52.640 "params": { 00:23:52.640 "impl_name": "ssl", 00:23:52.640 "recv_buf_size": 4096, 00:23:52.640 "send_buf_size": 4096, 00:23:52.640 "enable_recv_pipe": true, 00:23:52.640 "enable_quickack": false, 00:23:52.640 "enable_placement_id": 0, 00:23:52.640 "enable_zerocopy_send_server": true, 00:23:52.640 "enable_zerocopy_send_client": false, 00:23:52.640 "zerocopy_threshold": 0, 00:23:52.640 "tls_version": 0, 00:23:52.640 "enable_ktls": false 00:23:52.640 } 
00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "sock_impl_set_options", 00:23:52.640 "params": { 00:23:52.640 "impl_name": "posix", 00:23:52.640 "recv_buf_size": 2097152, 00:23:52.640 "send_buf_size": 2097152, 00:23:52.640 "enable_recv_pipe": true, 00:23:52.640 "enable_quickack": false, 00:23:52.640 "enable_placement_id": 0, 00:23:52.640 "enable_zerocopy_send_server": true, 00:23:52.640 "enable_zerocopy_send_client": false, 00:23:52.640 "zerocopy_threshold": 0, 00:23:52.640 "tls_version": 0, 00:23:52.640 "enable_ktls": false 00:23:52.640 } 00:23:52.640 } 00:23:52.640 ] 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "subsystem": "vmd", 00:23:52.640 "config": [] 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "subsystem": "accel", 00:23:52.640 "config": [ 00:23:52.640 { 00:23:52.640 "method": "accel_set_options", 00:23:52.640 "params": { 00:23:52.640 "small_cache_size": 128, 00:23:52.640 "large_cache_size": 16, 00:23:52.640 "task_count": 2048, 00:23:52.640 "sequence_count": 2048, 00:23:52.640 "buf_count": 2048 00:23:52.640 } 00:23:52.640 } 00:23:52.640 ] 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "subsystem": "bdev", 00:23:52.640 "config": [ 00:23:52.640 { 00:23:52.640 "method": "bdev_set_options", 00:23:52.640 "params": { 00:23:52.640 "bdev_io_pool_size": 65535, 00:23:52.640 "bdev_io_cache_size": 256, 00:23:52.640 "bdev_auto_examine": true, 00:23:52.640 "iobuf_small_cache_size": 128, 00:23:52.640 "iobuf_large_cache_size": 16 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_raid_set_options", 00:23:52.640 "params": { 00:23:52.640 "process_window_size_kb": 1024, 00:23:52.640 "process_max_bandwidth_mb_sec": 0 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_iscsi_set_options", 00:23:52.640 "params": { 00:23:52.640 "timeout_sec": 30 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_nvme_set_options", 00:23:52.640 "params": { 00:23:52.640 "action_on_timeout": "none", 00:23:52.640 "timeout_us": 0, 00:23:52.640 "timeout_admin_us": 0, 00:23:52.640 "keep_alive_timeout_ms": 10000, 00:23:52.640 "arbitration_burst": 0, 00:23:52.640 "low_priority_weight": 0, 00:23:52.640 "medium_priority_weight": 0, 00:23:52.640 "high_priority_weight": 0, 00:23:52.640 "nvme_adminq_poll_period_us": 10000, 00:23:52.640 "nvme_ioq_poll_period_us": 0, 00:23:52.640 "io_queue_requests": 512, 00:23:52.640 "delay_cmd_submit": true, 00:23:52.640 "transport_retry_count": 4, 00:23:52.640 "bdev_retry_count": 3, 00:23:52.640 "transport_ack_timeout": 0, 00:23:52.640 "ctrlr_loss_timeout_sec": 0, 00:23:52.640 "reconnect_delay_sec": 0, 00:23:52.640 "fast_io_fail_timeout_sec": 0, 00:23:52.640 "disable_auto_failback": false, 00:23:52.640 "generate_uuids": false, 00:23:52.640 "transport_tos": 0, 00:23:52.640 "nvme_error_stat": false, 00:23:52.640 "rdma_srq_size": 0, 00:23:52.640 "io_path_stat": false, 00:23:52.640 "allow_accel_sequence": false, 00:23:52.640 "rdma_max_cq_size": 0, 00:23:52.640 "rdma_cm_event_timeout_ms": 0, 00:23:52.640 "dhchap_digests": [ 00:23:52.640 "sha256", 00:23:52.640 "sha384", 00:23:52.640 "sha512" 00:23:52.640 ], 00:23:52.640 "dhchap_dhgroups": [ 00:23:52.640 "null", 00:23:52.640 "ffdhe2048", 00:23:52.640 "ffdhe3072", 00:23:52.640 "ffdhe4096", 00:23:52.640 "ffdhe6144", 00:23:52.640 "ffdhe8192" 00:23:52.640 ] 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_nvme_attach_controller", 00:23:52.640 "params": { 00:23:52.640 "name": "nvme0", 00:23:52.640 "trtype": "TCP", 00:23:52.640 "adrfam": "IPv4", 00:23:52.640 
"traddr": "10.0.0.2", 00:23:52.640 "trsvcid": "4420", 00:23:52.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.640 "prchk_reftag": false, 00:23:52.640 "prchk_guard": false, 00:23:52.640 "ctrlr_loss_timeout_sec": 0, 00:23:52.640 "reconnect_delay_sec": 0, 00:23:52.640 "fast_io_fail_timeout_sec": 0, 00:23:52.640 "psk": "key0", 00:23:52.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.640 "hdgst": false, 00:23:52.640 "ddgst": false 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_nvme_set_hotplug", 00:23:52.640 "params": { 00:23:52.640 "period_us": 100000, 00:23:52.640 "enable": false 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_enable_histogram", 00:23:52.640 "params": { 00:23:52.640 "name": "nvme0n1", 00:23:52.640 "enable": true 00:23:52.640 } 00:23:52.640 }, 00:23:52.640 { 00:23:52.640 "method": "bdev_wait_for_examine" 00:23:52.640 } 00:23:52.641 ] 00:23:52.641 }, 00:23:52.641 { 00:23:52.641 "subsystem": "nbd", 00:23:52.641 "config": [] 00:23:52.641 } 00:23:52.641 ] 00:23:52.641 }' 00:23:52.641 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.641 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.641 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.641 [2024-07-25 13:51:49.307939] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:52.641 [2024-07-25 13:51:49.307992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336526 ] 00:23:52.641 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.641 [2024-07-25 13:51:49.345022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:52.641 [2024-07-25 13:51:49.380116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.641 [2024-07-25 13:51:49.418389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.900 [2024-07-25 13:51:49.563672] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.469 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.469 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.469 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:53.469 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:53.469 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.469 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.730 Running I/O for 1 seconds... 
00:23:54.668 00:23:54.668 Latency(us) 00:23:54.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.668 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:54.668 Verification LBA range: start 0x0 length 0x2000 00:23:54.668 nvme0n1 : 1.03 4433.23 17.32 0.00 0.00 28525.13 6684.67 54525.95 00:23:54.668 =================================================================================================================== 00:23:54.668 Total : 4433.23 17.32 0.00 0.00 28525.13 6684.67 54525.95 00:23:54.668 0 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:54.668 nvmf_trace.0 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 336526 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 336526 ']' 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 336526 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.668 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 336526 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 336526' 00:23:54.928 killing process with pid 336526 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 336526 00:23:54.928 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.928 00:23:54.928 Latency(us) 00:23:54.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.928 
=================================================================================================================== 00:23:54.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 336526 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.928 rmmod nvme_tcp 00:23:54.928 rmmod nvme_fabrics 00:23:54.928 rmmod nvme_keyring 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 336249 ']' 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 336249 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 336249 ']' 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 336249 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.928 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 336249 00:23:55.188 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:55.188 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:55.188 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 336249' 00:23:55.188 killing process with pid 336249 00:23:55.188 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 336249 00:23:55.188 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 336249 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:23:55.188 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BSvDb5G76M /tmp/tmp.ip27oFTH8Y /tmp/tmp.HUBCeDzLAC 00:23:57.728 00:23:57.728 real 1m20.002s 00:23:57.728 user 1m52.976s 00:23:57.728 sys 0m34.910s 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 ************************************ 00:23:57.728 END TEST nvmf_tls 00:23:57.728 ************************************ 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 ************************************ 00:23:57.728 START TEST nvmf_fips 00:23:57.728 ************************************ 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:57.728 * Looking for test storage... 00:23:57.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.728 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 
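The trace above captures fips.sh gating the run on the host's OpenSSL version: "ge 3.0.9 3.0.0" delegates to cmp_versions in scripts/common.sh, which splits both version strings on ".", "-", and ":" (via IFS), reads them into arrays, and compares the fields left to right, treating missing fields as 0. A minimal standalone sketch of the same idiom, supporting only ">=" (the function body is an illustrative re-implementation, not the verbatim SPDK helper):

    # version_ge VER1 VER2 -- succeed if VER1 >= VER2, comparing numeric fields
    version_ge() {
        local IFS=.-:            # same field separators cmp_versions uses
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0                 # all fields equal
    }
    # e.g. version_ge "$(openssl version | awk '{print $2}')" 3.0.0
    # succeeds for "3.0.9", fails for "1.1.1"; letter suffixes like "1.1.1k"
    # are not handled by this sketch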
00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:57.729 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:57.730 Error setting digest 00:23:57.730 0042D8EE2C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:57.730 0042D8EE2C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.730 
13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.730 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.302 13:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:04.302 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:04.302 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.302 13:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:04.302 Found net devices under 0000:af:00.0: cvl_0_0 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:04.302 Found net devices under 0000:af:00.1: cvl_0_1 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.302 13:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:04.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:24:04.302 00:24:04.302 --- 10.0.0.2 ping statistics --- 00:24:04.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.302 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:04.302 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:24:04.302 00:24:04.302 --- 10.0.0.1 ping statistics --- 00:24:04.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.303 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=340514 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 340514 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 340514 ']' 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.303 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:04.303 [2024-07-25 13:52:00.956557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:24:04.303 [2024-07-25 13:52:00.956610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.303 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.303 [2024-07-25 13:52:00.996746] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:04.303 [2024-07-25 13:52:01.031213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.303 [2024-07-25 13:52:01.069176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.303 [2024-07-25 13:52:01.069215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.303 [2024-07-25 13:52:01.069225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.303 [2024-07-25 13:52:01.069234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.303 [2024-07-25 13:52:01.069241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.303 [2024-07-25 13:52:01.069261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.871 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.871 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:04.871 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.871 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.871 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.130 [2024-07-25 13:52:01.935435] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.130 [2024-07-25 13:52:01.951440] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:24:05.130 [2024-07-25 13:52:01.951596] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.130 [2024-07-25 13:52:01.979582] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:05.130 malloc0 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=340794 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 340794 /var/tmp/bdevperf.sock 00:24:05.130 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 340794 ']' 00:24:05.131 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.131 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.131 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.131 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.131 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.390 [2024-07-25 13:52:02.062358] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:24:05.390 [2024-07-25 13:52:02.062415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340794 ] 00:24:05.390 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.390 [2024-07-25 13:52:02.097584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:05.390 [2024-07-25 13:52:02.129095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.390 [2024-07-25 13:52:02.167122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.958 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.958 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:05.958 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:06.217 [2024-07-25 13:52:02.979971] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.217 [2024-07-25 13:52:02.980077] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:06.217 TLSTESTn1 00:24:06.217 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.476 Running I/O for 10 seconds... 00:24:16.487 00:24:16.488 Latency(us) 00:24:16.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.488 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.488 Verification LBA range: start 0x0 length 0x2000 00:24:16.488 TLSTESTn1 : 10.03 4598.05 17.96 0.00 0.00 27787.36 6973.03 75078.04 00:24:16.488 =================================================================================================================== 00:24:16.488 Total : 4598.05 17.96 0.00 0.00 27787.36 6973.03 75078.04 00:24:16.488 0 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:16.488 nvmf_trace.0 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 340794 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' 
-z 340794 ']' 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 340794 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 340794 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 340794' 00:24:16.488 killing process with pid 340794 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 340794 00:24:16.488 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.488 00:24:16.488 Latency(us) 00:24:16.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.488 =================================================================================================================== 00:24:16.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.488 [2024-07-25 13:52:13.360023] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:16.488 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 340794 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.747 rmmod nvme_tcp 00:24:16.747 rmmod nvme_fabrics 00:24:16.747 rmmod nvme_keyring 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 340514 ']' 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 340514 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 340514 ']' 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 340514 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.747 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 340514 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 340514' 00:24:17.005 killing process with pid 340514 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 340514 00:24:17.005 [2024-07-25 13:52:13.664495] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 340514 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.005 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.537 00:24:19.537 real 0m21.715s 00:24:19.537 user 0m21.513s 00:24:19.537 sys 0m10.862s 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.537 ************************************ 00:24:19.537 END TEST nvmf_fips 00:24:19.537 ************************************ 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.537 13:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:19.537 ************************************ 00:24:19.537 START TEST nvmf_fuzz 00:24:19.537 ************************************ 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.537 * Looking for test storage... 
00:24:19.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
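Before rediscovering the NICs for the fuzz run, nvmftestinit repeats the same plumbing the FIPS test performed above: move one port of the e810 pair (cvl_0_0) into the private namespace cvl_0_0_ns_spdk, address both ends, bring the links up, and confirm reachability with ping before nvmf_tgt starts. A condensed sketch of that sequence, substituting a veth pair for the two physical ports so it can be tried on a machine without these NICs (the veth substitution is mine; the namespace name, device names, and addresses mirror the trace):

    #!/usr/bin/env bash
    set -euo pipefail
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link add cvl_0_1 type veth peer name cvl_0_0   # stand-ins for the two e810 ports
    ip link set cvl_0_0 netns "$NS"                   # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                                # initiator -> target, as in the log
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator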
00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.537 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:26.108 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:26.108 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.108 13:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:26.108 Found net devices under 0000:af:00.0: cvl_0_0 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:26.108 Found net devices under 0000:af:00.1: cvl_0_1 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.108 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.109 
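The netns add that was just issued starts the harness splitting one dual-port NIC across two network stacks, so target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. Condensed recap of the logged command sequence, using the interface names this job discovered; nothing here beyond what the trace runs:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator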
13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:26.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:26.109 00:24:26.109 --- 10.0.0.2 ping statistics --- 00:24:26.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.109 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:24:26.109 00:24:26.109 --- 10.0.0.1 ping statistics --- 00:24:26.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.109 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=346320 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 346320 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 346320 ']' 
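With connectivity verified, the trace ahead starts nvmf_tgt pinned to one core inside the namespace, provisions a single Malloc-backed subsystem, and drives two nvme_fuzz passes against it. Reading off the two invocations that follow: the first is a 30-second randomized run with a fixed seed, the second replays the canned command set in example.json. Flag glosses below are inferred from this usage, not from the tool's help text; paths are relative to the spdk checkout.

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # single-core target
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  FUZZ=test/app/fuzz/nvme_fuzz/nvme_fuzz
  $FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a        # randomized commands, 30 s, seed 123456
  $FUZZ -m 0x2 -F "$TRID" -j test/app/fuzz/nvme_fuzz/example.json -a   # replay from JSON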
00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.109 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.054 Malloc0 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:27.054 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:59.137 Fuzzing completed. Shutting down the fuzz application 00:24:59.137 00:24:59.137 Dumping successful admin opcodes: 00:24:59.137 8, 9, 10, 24, 00:24:59.137 Dumping successful io opcodes: 00:24:59.137 0, 9, 00:24:59.137 NS: 0x200003aeff00 I/O qp, Total commands completed: 761286, total successful commands: 4435, random_seed: 2851289024 00:24:59.137 NS: 0x200003aeff00 admin qp, Total commands completed: 87241, total successful commands: 697, random_seed: 3833316544 00:24:59.137 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:59.137 Fuzzing completed. Shutting down the fuzz application 00:24:59.137 00:24:59.137 Dumping successful admin opcodes: 00:24:59.137 24, 00:24:59.137 Dumping successful io opcodes: 00:24:59.137 00:24:59.137 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3848455602 00:24:59.137 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3848544542 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.137 rmmod nvme_tcp 00:24:59.137 rmmod nvme_fabrics 00:24:59.137 rmmod nvme_keyring 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 346320 ']' 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
346320 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 346320 ']' 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 346320 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 346320 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 346320' 00:24:59.137 killing process with pid 346320 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 346320 00:24:59.137 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 346320 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.397 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.343 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:01.344 00:25:01.344 real 0m42.151s 00:25:01.344 user 0m51.541s 00:25:01.344 sys 0m20.304s 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.344 ************************************ 00:25:01.344 END TEST nvmf_fuzz 00:25:01.344 ************************************ 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:01.344 ************************************ 00:25:01.344 START TEST 
nvmf_multiconnection 00:25:01.344 ************************************ 00:25:01.344 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:01.603 * Looking for test storage... 00:25:01.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.603 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=... 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=... 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo ... 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.604 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.177 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.177 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.177 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.178 13:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.178 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.178 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.178 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.178 Found net devices 
under 0000:af:00.1: cvl_0_1 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:08.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:25:08.178 00:25:08.178 --- 10.0.0.2 ping statistics --- 00:25:08.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.178 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:08.178 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:25:08.178 00:25:08.178 --- 10.0.0.1 ping statistics --- 00:25:08.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.178 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.179 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=355432 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 355432 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 355432 ']' 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.179 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.437 [2024-07-25 13:53:05.078343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:25:08.437 [2024-07-25 13:53:05.078393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.437 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.437 [2024-07-25 13:53:05.118598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:08.437 [2024-07-25 13:53:05.153451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.437 [2024-07-25 13:53:05.194745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.437 [2024-07-25 13:53:05.194786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.437 [2024-07-25 13:53:05.194795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.437 [2024-07-25 13:53:05.194804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.437 [2024-07-25 13:53:05.194811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.437 [2024-07-25 13:53:05.194853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.437 [2024-07-25 13:53:05.194948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.437 [2024-07-25 13:53:05.195035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.437 [2024-07-25 13:53:05.195037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.005 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:09.005 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:09.005 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.005 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:09.005 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 [2024-07-25 13:53:05.935061] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:05 
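For the multiconnection test the same pattern repeats at a larger scale: the target gets four reactor cores (mask 0xF; the notices above show reactors on cores 0 through 3) and a TCP transport with an 8192-byte IO unit. A back-reference sketch of the two steps just logged, assuming rpc_cmd resolves to the usual scripts/rpc.py front end and paths are relative to the spdk checkout:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 reactors, shm id 0
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                     # flags as logged above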
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 Malloc1 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 [2024-07-25 13:53:05.997775] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 Malloc2 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 Malloc3 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.265 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.266 Malloc4 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.266 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 Malloc5 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 
13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 Malloc6 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.526 13:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 Malloc7 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 Malloc8 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:09.526 13:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 Malloc9 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.526 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.527 Malloc10 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.527 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.786 Malloc11 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
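
The records above trace multiconnection.sh's target-side setup loop: for each of the eleven subsystems it creates a 64 MiB malloc bdev with 512-byte blocks, wraps it in an NVMe-oF subsystem that allows any host (-a) and carries serial number SPDKn (-s), attaches the bdev as a namespace, and exposes an NVMe/TCP listener on 10.0.0.2:4420; the final listener record for cnode11 follows immediately below. A minimal stand-alone sketch of the same sequence, assuming the TCP transport was already created earlier in the run (nvmf_create_transport -t tcp) and calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper:

    for i in $(seq 1 11); do
        # 64 MiB RAM-backed bdev with 512-byte blocks, named MallocN
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        # subsystem cnodeN: -a = allow any host NQN, -s = serial number SPDKN
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        # attach the bdev as a namespace of the subsystem
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # accept NVMe/TCP hosts on 10.0.0.2:4420
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
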
00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.786 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:11.163 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:11.163 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:11.163 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:11.163 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:11.163 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.068 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:14.443 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:14.443 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:14.443 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:14.443 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:14.443 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:16.348 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.349 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:17.727 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:17.727 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:17.727 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.727 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:17.727 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.632 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:21.012 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:21.012 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.012 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.012 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:21.012 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:22.924 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:23.217 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:23.217 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:23.217 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:23.217 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:23.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:24.597 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:24.597 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.597 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.597 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.597 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.503 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:27.880 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:27.880 13:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.880 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.880 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.880 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.414 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:31.352 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:31.352 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:31.352 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.352 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:31.352 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.886 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
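
Each host-side record in this phase follows the same connect-and-poll pattern: nvme-cli attaches the initiator to one cnode over TCP, then the harness's waitforserial helper (autotest_common.sh@1198-1208 in the trace) polls lsblk until a block device advertising the expected SPDKn serial appears, sleeping two seconds between at most sixteen checks. A rough single-subsystem equivalent, with $HOSTNQN standing in for the uuid-derived host NQN used above:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode8 --hostnqn="$HOSTNQN"
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        # done once exactly one block device reports serial SPDK8
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK8) == 1 )) && break
    done
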
00:25:34.822 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:34.822 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:34.822 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.822 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:34.822 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.367 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:38.745 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:38.745 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:38.745 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.745 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:38.745 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.651 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:42.061 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:42.061 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:42.061 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.061 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:42.061 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.599 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:45.977 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:45.977 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:45.977 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.977 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:45.977 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:47.877 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:47.877 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:47.877 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:48.136 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:48.136 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:48.136 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:48.136 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:48.136 [global] 00:25:48.136 thread=1 00:25:48.136 invalidate=1 00:25:48.136 rw=read 00:25:48.136 time_based=1 00:25:48.136 runtime=10 00:25:48.136 ioengine=libaio 00:25:48.136 direct=1 00:25:48.136 bs=262144 00:25:48.136 iodepth=64 00:25:48.136 norandommap=1 00:25:48.136 numjobs=1 00:25:48.136 00:25:48.136 [job0] 00:25:48.136 filename=/dev/nvme0n1 00:25:48.136 [job1] 00:25:48.136 filename=/dev/nvme10n1 00:25:48.136 [job2] 00:25:48.136 filename=/dev/nvme1n1 00:25:48.136 [job3] 00:25:48.136 filename=/dev/nvme2n1 00:25:48.136 [job4] 00:25:48.136 filename=/dev/nvme3n1 00:25:48.136 [job5] 00:25:48.136 filename=/dev/nvme4n1 00:25:48.136 [job6] 00:25:48.136 filename=/dev/nvme5n1 00:25:48.136 [job7] 00:25:48.136 filename=/dev/nvme6n1 00:25:48.136 [job8] 00:25:48.136 filename=/dev/nvme7n1 00:25:48.136 [job9] 00:25:48.136 filename=/dev/nvme8n1 00:25:48.136 [job10] 00:25:48.136 filename=/dev/nvme9n1 00:25:48.409 Could not set queue depth (nvme0n1) 00:25:48.409 Could not set queue depth (nvme10n1) 00:25:48.409 Could not set queue depth (nvme1n1) 00:25:48.409 Could not set queue depth (nvme2n1) 00:25:48.409 Could not set queue depth (nvme3n1) 00:25:48.409 Could not set queue depth (nvme4n1) 00:25:48.409 Could not set queue depth (nvme5n1) 00:25:48.409 Could not set queue depth (nvme6n1) 00:25:48.409 Could not set queue depth (nvme7n1) 00:25:48.409 Could not set queue depth (nvme8n1) 00:25:48.409 Could not set queue depth (nvme9n1) 00:25:48.668 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.668 fio-3.35 00:25:48.668 Starting 11 threads 00:26:00.884 00:26:00.884 job0: (groupid=0, jobs=1): err= 0: pid=362250: Thu Jul 25 13:53:55 2024 00:26:00.884 read: IOPS=861, BW=215MiB/s (226MB/s)(2167MiB/10063msec) 00:26:00.884 slat (usec): min=11, max=113334, avg=948.37, stdev=3586.55 00:26:00.884 clat (msec): min=2, max=259, avg=73.24, stdev=41.98 00:26:00.884 lat (msec): min=2, max=262, avg=74.18, stdev=42.58 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 27], 20.00th=[ 42], 00:26:00.884 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 74], 00:26:00.884 | 70.00th=[ 90], 
80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 153], 00:26:00.884 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 215], 99.95th=[ 222], 00:26:00.884 | 99.99th=[ 259] 00:26:00.884 bw ( KiB/s): min=98304, max=418304, per=10.01%, avg=220262.40, stdev=86391.74, samples=20 00:26:00.884 iops : min= 384, max= 1634, avg=860.40, stdev=337.47, samples=20 00:26:00.884 lat (msec) : 4=0.25%, 10=2.26%, 20=4.50%, 50=28.04%, 100=39.79% 00:26:00.884 lat (msec) : 250=25.13%, 500=0.02% 00:26:00.884 cpu : usr=0.39%, sys=3.68%, ctx=2137, majf=0, minf=4097 00:26:00.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:00.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.884 issued rwts: total=8667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.884 job1: (groupid=0, jobs=1): err= 0: pid=362251: Thu Jul 25 13:53:55 2024 00:26:00.884 read: IOPS=656, BW=164MiB/s (172MB/s)(1653MiB/10071msec) 00:26:00.884 slat (usec): min=9, max=94868, avg=1295.48, stdev=4033.10 00:26:00.884 clat (msec): min=3, max=214, avg=96.04, stdev=41.30 00:26:00.884 lat (msec): min=3, max=286, avg=97.33, stdev=41.97 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 37], 20.00th=[ 58], 00:26:00.884 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 109], 00:26:00.884 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 159], 00:26:00.884 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 205], 99.95th=[ 211], 00:26:00.884 | 99.99th=[ 215] 00:26:00.884 bw ( KiB/s): min=98816, max=358400, per=7.62%, avg=167628.80, stdev=62931.93, samples=20 00:26:00.884 iops : min= 386, max= 1400, avg=654.80, stdev=245.83, samples=20 00:26:00.884 lat (msec) : 4=0.15%, 10=0.53%, 20=1.10%, 50=13.90%, 100=36.39% 00:26:00.884 lat (msec) : 250=47.92% 00:26:00.884 cpu : usr=0.39%, sys=2.73%, ctx=1661, majf=0, minf=3222 00:26:00.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:00.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.884 issued rwts: total=6611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.884 job2: (groupid=0, jobs=1): err= 0: pid=362252: Thu Jul 25 13:53:55 2024 00:26:00.884 read: IOPS=1061, BW=265MiB/s (278MB/s)(2671MiB/10066msec) 00:26:00.884 slat (usec): min=10, max=44288, avg=912.95, stdev=2484.76 00:26:00.884 clat (msec): min=3, max=174, avg=59.28, stdev=24.99 00:26:00.884 lat (msec): min=3, max=181, avg=60.19, stdev=25.33 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 41], 00:26:00.884 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 59], 00:26:00.884 | 70.00th=[ 68], 80.00th=[ 78], 90.00th=[ 92], 95.00th=[ 110], 00:26:00.884 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 174], 00:26:00.884 | 99.99th=[ 176] 00:26:00.884 bw ( KiB/s): min=122880, max=420864, per=12.35%, avg=271923.20, stdev=82524.57, samples=20 00:26:00.884 iops : min= 480, max= 1644, avg=1062.20, stdev=322.36, samples=20 00:26:00.884 lat (msec) : 4=0.06%, 10=0.28%, 20=0.36%, 50=45.71%, 100=46.29% 00:26:00.884 lat (msec) : 250=7.30% 00:26:00.884 cpu : usr=0.59%, sys=4.16%, ctx=2210, majf=0, minf=4097 00:26:00.884 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:00.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.884 issued rwts: total=10685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.884 job3: (groupid=0, jobs=1): err= 0: pid=362253: Thu Jul 25 13:53:55 2024 00:26:00.884 read: IOPS=559, BW=140MiB/s (147MB/s)(1411MiB/10086msec) 00:26:00.884 slat (usec): min=16, max=67955, avg=1638.33, stdev=4871.72 00:26:00.884 clat (msec): min=2, max=270, avg=112.52, stdev=39.04 00:26:00.884 lat (msec): min=2, max=270, avg=114.16, stdev=39.69 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 11], 5.00th=[ 47], 10.00th=[ 59], 20.00th=[ 82], 00:26:00.884 | 30.00th=[ 96], 40.00th=[ 106], 50.00th=[ 115], 60.00th=[ 125], 00:26:00.884 | 70.00th=[ 133], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 171], 00:26:00.884 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 251], 99.95th=[ 251], 00:26:00.884 | 99.99th=[ 271] 00:26:00.884 bw ( KiB/s): min=93184, max=268288, per=6.49%, avg=142861.65, stdev=40306.39, samples=20 00:26:00.884 iops : min= 364, max= 1048, avg=558.05, stdev=157.45, samples=20 00:26:00.884 lat (msec) : 4=0.09%, 10=0.83%, 20=1.19%, 50=3.93%, 100=27.44% 00:26:00.884 lat (msec) : 250=66.40%, 500=0.12% 00:26:00.884 cpu : usr=0.38%, sys=2.82%, ctx=1351, majf=0, minf=4097 00:26:00.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:00.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.884 issued rwts: total=5645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.884 job4: (groupid=0, jobs=1): err= 0: pid=362254: Thu Jul 25 13:53:55 2024 00:26:00.884 read: IOPS=899, BW=225MiB/s (236MB/s)(2268MiB/10084msec) 00:26:00.884 slat (usec): min=9, max=138647, avg=798.18, stdev=3284.02 00:26:00.884 clat (usec): min=856, max=268741, avg=70205.28, stdev=42289.36 00:26:00.884 lat (usec): min=899, max=344908, avg=71003.47, stdev=42822.43 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 33], 00:26:00.884 | 30.00th=[ 45], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 72], 00:26:00.884 | 70.00th=[ 89], 80.00th=[ 107], 90.00th=[ 130], 95.00th=[ 140], 00:26:00.884 | 99.00th=[ 199], 99.50th=[ 241], 99.90th=[ 266], 99.95th=[ 266], 00:26:00.884 | 99.99th=[ 271] 00:26:00.884 bw ( KiB/s): min=118784, max=428032, per=10.48%, avg=230604.80, stdev=91484.50, samples=20 00:26:00.884 iops : min= 464, max= 1672, avg=900.80, stdev=357.36, samples=20 00:26:00.884 lat (usec) : 1000=0.01% 00:26:00.884 lat (msec) : 2=0.21%, 4=0.34%, 10=2.95%, 20=6.58%, 50=25.02% 00:26:00.884 lat (msec) : 100=41.01%, 250=23.72%, 500=0.14% 00:26:00.884 cpu : usr=0.40%, sys=3.65%, ctx=2297, majf=0, minf=4097 00:26:00.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:00.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.884 issued rwts: total=9071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.884 job5: (groupid=0, jobs=1): err= 0: pid=362256: Thu Jul 25 13:53:55 2024 
00:26:00.884 read: IOPS=737, BW=184MiB/s (193MB/s)(1850MiB/10036msec) 00:26:00.884 slat (usec): min=9, max=159050, avg=1199.33, stdev=4454.65 00:26:00.884 clat (msec): min=2, max=315, avg=85.47, stdev=44.87 00:26:00.884 lat (msec): min=2, max=315, avg=86.67, stdev=45.53 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 34], 20.00th=[ 55], 00:26:00.884 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 83], 00:26:00.884 | 70.00th=[ 101], 80.00th=[ 125], 90.00th=[ 155], 95.00th=[ 163], 00:26:00.884 | 99.00th=[ 209], 99.50th=[ 228], 99.90th=[ 236], 99.95th=[ 249], 00:26:00.884 | 99.99th=[ 317] 00:26:00.884 bw ( KiB/s): min=72704, max=347648, per=8.53%, avg=187784.60, stdev=70167.88, samples=20 00:26:00.884 iops : min= 284, max= 1358, avg=733.50, stdev=274.10, samples=20 00:26:00.884 lat (msec) : 4=0.86%, 10=2.23%, 20=3.11%, 50=9.65%, 100=53.99% 00:26:00.884 lat (msec) : 250=30.13%, 500=0.03% 00:26:00.884 cpu : usr=0.46%, sys=3.25%, ctx=1635, majf=0, minf=4097 00:26:00.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:26:00.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.884 issued rwts: total=7399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.884 job6: (groupid=0, jobs=1): err= 0: pid=362260: Thu Jul 25 13:53:55 2024 00:26:00.884 read: IOPS=662, BW=166MiB/s (174MB/s)(1665MiB/10055msec) 00:26:00.884 slat (usec): min=8, max=115544, avg=1223.07, stdev=4941.23 00:26:00.884 clat (msec): min=2, max=239, avg=95.30, stdev=49.94 00:26:00.884 lat (msec): min=2, max=239, avg=96.52, stdev=50.79 00:26:00.884 clat percentiles (msec): 00:26:00.884 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 44], 00:26:00.884 | 30.00th=[ 63], 40.00th=[ 75], 50.00th=[ 99], 60.00th=[ 115], 00:26:00.885 | 70.00th=[ 131], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 174], 00:26:00.885 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 218], 99.95th=[ 220], 00:26:00.885 | 99.99th=[ 241] 00:26:00.885 bw ( KiB/s): min=90624, max=316928, per=7.67%, avg=168908.80, stdev=64371.74, samples=20 00:26:00.885 iops : min= 354, max= 1238, avg=659.80, stdev=251.45, samples=20 00:26:00.885 lat (msec) : 4=0.36%, 10=2.22%, 20=2.63%, 50=18.86%, 100=26.35% 00:26:00.885 lat (msec) : 250=49.59% 00:26:00.885 cpu : usr=0.36%, sys=2.76%, ctx=1696, majf=0, minf=4097 00:26:00.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:00.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.885 issued rwts: total=6661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.885 job7: (groupid=0, jobs=1): err= 0: pid=362261: Thu Jul 25 13:53:55 2024 00:26:00.885 read: IOPS=969, BW=242MiB/s (254MB/s)(2436MiB/10045msec) 00:26:00.885 slat (usec): min=11, max=133968, avg=820.43, stdev=3371.60 00:26:00.885 clat (msec): min=2, max=229, avg=65.05, stdev=34.76 00:26:00.885 lat (msec): min=2, max=331, avg=65.87, stdev=35.27 00:26:00.885 clat percentiles (msec): 00:26:00.885 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 27], 20.00th=[ 39], 00:26:00.885 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 67], 00:26:00.885 | 70.00th=[ 74], 80.00th=[ 89], 90.00th=[ 120], 95.00th=[ 136], 00:26:00.885 | 
99.00th=[ 161], 99.50th=[ 184], 99.90th=[ 205], 99.95th=[ 205], 00:26:00.885 | 99.99th=[ 230] 00:26:00.885 bw ( KiB/s): min=115200, max=429056, per=11.26%, avg=247814.65, stdev=81865.37, samples=20 00:26:00.885 iops : min= 450, max= 1676, avg=968.00, stdev=319.81, samples=20 00:26:00.885 lat (msec) : 4=0.05%, 10=1.30%, 20=4.03%, 50=31.48%, 100=47.72% 00:26:00.885 lat (msec) : 250=15.42% 00:26:00.885 cpu : usr=0.50%, sys=3.83%, ctx=2337, majf=0, minf=4097 00:26:00.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:00.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.885 issued rwts: total=9743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.885 job8: (groupid=0, jobs=1): err= 0: pid=362271: Thu Jul 25 13:53:55 2024 00:26:00.885 read: IOPS=852, BW=213MiB/s (223MB/s)(2146MiB/10075msec) 00:26:00.885 slat (usec): min=8, max=107325, avg=1034.15, stdev=3893.73 00:26:00.885 clat (usec): min=1521, max=259871, avg=73951.77, stdev=37765.75 00:26:00.885 lat (usec): min=1564, max=282343, avg=74985.92, stdev=38320.57 00:26:00.885 clat percentiles (msec): 00:26:00.885 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 45], 00:26:00.885 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 73], 00:26:00.885 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 140], 95.00th=[ 157], 00:26:00.885 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 239], 00:26:00.885 | 99.99th=[ 259] 00:26:00.885 bw ( KiB/s): min=103424, max=364544, per=9.91%, avg=218137.60, stdev=71718.58, samples=20 00:26:00.885 iops : min= 404, max= 1424, avg=852.10, stdev=280.15, samples=20 00:26:00.885 lat (msec) : 2=0.01%, 4=0.09%, 10=0.27%, 20=1.82%, 50=21.75% 00:26:00.885 lat (msec) : 100=58.82%, 250=17.21%, 500=0.03% 00:26:00.885 cpu : usr=0.42%, sys=3.42%, ctx=1932, majf=0, minf=4097 00:26:00.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:00.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.885 issued rwts: total=8584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.885 job9: (groupid=0, jobs=1): err= 0: pid=362281: Thu Jul 25 13:53:55 2024 00:26:00.885 read: IOPS=723, BW=181MiB/s (190MB/s)(1823MiB/10078msec) 00:26:00.885 slat (usec): min=9, max=80303, avg=818.02, stdev=3593.06 00:26:00.885 clat (msec): min=2, max=258, avg=87.50, stdev=48.13 00:26:00.885 lat (msec): min=2, max=258, avg=88.32, stdev=48.57 00:26:00.885 clat percentiles (msec): 00:26:00.885 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 42], 00:26:00.885 | 30.00th=[ 55], 40.00th=[ 68], 50.00th=[ 84], 60.00th=[ 105], 00:26:00.885 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 165], 00:26:00.885 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 257], 99.95th=[ 259], 00:26:00.885 | 99.99th=[ 259] 00:26:00.885 bw ( KiB/s): min=96768, max=429056, per=8.40%, avg=184974.70, stdev=81803.13, samples=20 00:26:00.885 iops : min= 378, max= 1676, avg=722.50, stdev=319.49, samples=20 00:26:00.885 lat (msec) : 4=0.43%, 10=1.87%, 20=4.42%, 50=19.51%, 100=31.82% 00:26:00.885 lat (msec) : 250=41.65%, 500=0.32% 00:26:00.885 cpu : usr=0.35%, sys=2.95%, ctx=2120, majf=0, minf=4097 00:26:00.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:26:00.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.885 issued rwts: total=7290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.885 job10: (groupid=0, jobs=1): err= 0: pid=362294: Thu Jul 25 13:53:55 2024 00:26:00.885 read: IOPS=631, BW=158MiB/s (166MB/s)(1593MiB/10087msec) 00:26:00.885 slat (usec): min=9, max=108362, avg=1402.30, stdev=4493.80 00:26:00.885 clat (msec): min=2, max=265, avg=99.71, stdev=42.75 00:26:00.885 lat (msec): min=2, max=265, avg=101.12, stdev=43.33 00:26:00.885 clat percentiles (msec): 00:26:00.885 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 52], 20.00th=[ 66], 00:26:00.885 | 30.00th=[ 79], 40.00th=[ 86], 50.00th=[ 93], 60.00th=[ 104], 00:26:00.885 | 70.00th=[ 117], 80.00th=[ 140], 90.00th=[ 161], 95.00th=[ 176], 00:26:00.885 | 99.00th=[ 201], 99.50th=[ 220], 99.90th=[ 249], 99.95th=[ 251], 00:26:00.885 | 99.99th=[ 266] 00:26:00.885 bw ( KiB/s): min=89088, max=256512, per=7.34%, avg=161554.35, stdev=49077.90, samples=20 00:26:00.885 iops : min= 348, max= 1002, avg=631.05, stdev=191.70, samples=20 00:26:00.885 lat (msec) : 4=0.20%, 10=1.73%, 20=1.74%, 50=5.52%, 100=48.41% 00:26:00.885 lat (msec) : 250=42.30%, 500=0.09% 00:26:00.885 cpu : usr=0.24%, sys=2.74%, ctx=1468, majf=0, minf=4097 00:26:00.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:00.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.885 issued rwts: total=6373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.885 00:26:00.885 Run status group 0 (all jobs): 00:26:00.885 READ: bw=2150MiB/s (2254MB/s), 140MiB/s-265MiB/s (147MB/s-278MB/s), io=21.2GiB (22.7GB), run=10036-10087msec 00:26:00.885 00:26:00.885 Disk stats (read/write): 00:26:00.885 nvme0n1: ios=17264/0, merge=0/0, ticks=1242789/0, in_queue=1242789, util=95.87% 00:26:00.885 nvme10n1: ios=13132/0, merge=0/0, ticks=1238669/0, in_queue=1238669, util=96.25% 00:26:00.885 nvme1n1: ios=21283/0, merge=0/0, ticks=1238581/0, in_queue=1238581, util=96.71% 00:26:00.885 nvme2n1: ios=11174/0, merge=0/0, ticks=1227341/0, in_queue=1227341, util=96.92% 00:26:00.885 nvme3n1: ios=18043/0, merge=0/0, ticks=1236089/0, in_queue=1236089, util=97.02% 00:26:00.885 nvme4n1: ios=14737/0, merge=0/0, ticks=1239690/0, in_queue=1239690, util=97.59% 00:26:00.885 nvme5n1: ios=13310/0, merge=0/0, ticks=1242871/0, in_queue=1242871, util=97.78% 00:26:00.885 nvme6n1: ios=19397/0, merge=0/0, ticks=1245069/0, in_queue=1245069, util=98.00% 00:26:00.885 nvme7n1: ios=17102/0, merge=0/0, ticks=1237840/0, in_queue=1237840, util=98.76% 00:26:00.885 nvme8n1: ios=14465/0, merge=0/0, ticks=1237241/0, in_queue=1237241, util=98.98% 00:26:00.885 nvme9n1: ios=12629/0, merge=0/0, ticks=1228567/0, in_queue=1228567, util=99.30% 00:26:00.885 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:00.885 [global] 00:26:00.885 thread=1 00:26:00.885 invalidate=1 00:26:00.885 rw=randwrite 00:26:00.885 time_based=1 00:26:00.885 runtime=10 00:26:00.885 ioengine=libaio 00:26:00.885 direct=1 00:26:00.885 
00:26:00.885 [global]
00:26:00.885 thread=1
00:26:00.885 invalidate=1
00:26:00.885 rw=randwrite
00:26:00.885 time_based=1
00:26:00.885 runtime=10
00:26:00.885 ioengine=libaio
00:26:00.885 direct=1
00:26:00.885 bs=262144
00:26:00.885 iodepth=64
00:26:00.885 norandommap=1
00:26:00.885 numjobs=1
00:26:00.885
00:26:00.885 [job0]
00:26:00.885 filename=/dev/nvme0n1
00:26:00.885 [job1]
00:26:00.885 filename=/dev/nvme10n1
00:26:00.885 [job2]
00:26:00.885 filename=/dev/nvme1n1
00:26:00.885 [job3]
00:26:00.885 filename=/dev/nvme2n1
00:26:00.885 [job4]
00:26:00.885 filename=/dev/nvme3n1
00:26:00.885 [job5]
00:26:00.885 filename=/dev/nvme4n1
00:26:00.885 [job6]
00:26:00.885 filename=/dev/nvme5n1
00:26:00.885 [job7]
00:26:00.885 filename=/dev/nvme6n1
00:26:00.885 [job8]
00:26:00.885 filename=/dev/nvme7n1
00:26:00.885 [job9]
00:26:00.885 filename=/dev/nvme8n1
00:26:00.885 [job10]
00:26:00.885 filename=/dev/nvme9n1
00:26:00.885 Could not set queue depth (nvme0n1)
00:26:00.885 Could not set queue depth (nvme10n1)
00:26:00.885 Could not set queue depth (nvme1n1)
00:26:00.885 Could not set queue depth (nvme2n1)
00:26:00.885 Could not set queue depth (nvme3n1)
00:26:00.885 Could not set queue depth (nvme4n1)
00:26:00.885 Could not set queue depth (nvme5n1)
00:26:00.885 Could not set queue depth (nvme6n1)
00:26:00.885 Could not set queue depth (nvme7n1)
00:26:00.885 Could not set queue depth (nvme8n1)
00:26:00.885 Could not set queue depth (nvme9n1)
00:26:00.885 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.885 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.885 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.885 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.885 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:00.886 fio-3.35
00:26:00.886 Starting 11 threads
00:26:10.867
00:26:10.867 job0: (groupid=0, jobs=1): err= 0: pid=363967: Thu Jul 25 13:54:07 2024
00:26:10.867 write: IOPS=675, BW=169MiB/s (177MB/s)(1694MiB/10037msec); 0 zone resets
00:26:10.867 slat (usec): min=21, max=44225, avg=1248.36, stdev=2737.79
00:26:10.867 clat (msec): min=2, max=220, avg=93.52, stdev=39.07
00:26:10.867 lat (msec): min=2, max=220, avg=94.76, stdev=39.57
00:26:10.867 clat percentiles (msec):
00:26:10.867 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 58],
00:26:10.867 | 30.00th=[ 75], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 106],
00:26:10.867 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 136], 95.00th=[ 161],
00:26:10.867 | 99.00th=[ 197], 99.50th=[ 207], 99.90th=[ 213], 99.95th=[ 215],
00:26:10.867 | 99.99th=[ 222]
00:26:10.867 bw ( KiB/s): min=106496, max=327310, per=9.80%, avg=171903.80,
stdev=56833.41, samples=20 00:26:10.867 iops : min= 416, max= 1278, avg=671.35, stdev=221.99, samples=20 00:26:10.867 lat (msec) : 4=0.07%, 10=0.44%, 20=3.25%, 50=14.74%, 100=32.61% 00:26:10.867 lat (msec) : 250=48.89% 00:26:10.867 cpu : usr=1.69%, sys=2.18%, ctx=2714, majf=0, minf=1 00:26:10.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:10.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.867 issued rwts: total=0,6777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.867 job1: (groupid=0, jobs=1): err= 0: pid=363968: Thu Jul 25 13:54:07 2024 00:26:10.867 write: IOPS=543, BW=136MiB/s (142MB/s)(1378MiB/10138msec); 0 zone resets 00:26:10.867 slat (usec): min=15, max=43256, avg=1363.95, stdev=3478.58 00:26:10.867 clat (msec): min=3, max=304, avg=116.35, stdev=57.95 00:26:10.867 lat (msec): min=5, max=304, avg=117.72, stdev=58.76 00:26:10.867 clat percentiles (msec): 00:26:10.867 | 1.00th=[ 12], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 65], 00:26:10.867 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 124], 00:26:10.867 | 70.00th=[ 150], 80.00th=[ 174], 90.00th=[ 199], 95.00th=[ 211], 00:26:10.867 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 296], 99.95th=[ 296], 00:26:10.867 | 99.99th=[ 305] 00:26:10.867 bw ( KiB/s): min=75474, max=213418, per=7.95%, avg=139459.95, stdev=45150.17, samples=20 00:26:10.867 iops : min= 294, max= 833, avg=544.60, stdev=176.48, samples=20 00:26:10.867 lat (msec) : 4=0.02%, 10=0.74%, 20=2.92%, 50=12.05%, 100=29.07% 00:26:10.867 lat (msec) : 250=54.52%, 500=0.67% 00:26:10.867 cpu : usr=1.26%, sys=1.88%, ctx=2878, majf=0, minf=1 00:26:10.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:10.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.867 issued rwts: total=0,5510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.867 job2: (groupid=0, jobs=1): err= 0: pid=363982: Thu Jul 25 13:54:07 2024 00:26:10.867 write: IOPS=706, BW=177MiB/s (185MB/s)(1792MiB/10137msec); 0 zone resets 00:26:10.867 slat (usec): min=25, max=63502, avg=1274.34, stdev=2794.60 00:26:10.867 clat (msec): min=3, max=288, avg=89.22, stdev=42.85 00:26:10.867 lat (msec): min=3, max=288, avg=90.49, stdev=43.38 00:26:10.867 clat percentiles (msec): 00:26:10.867 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 42], 00:26:10.867 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 95], 60.00th=[ 105], 00:26:10.867 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 146], 95.00th=[ 167], 00:26:10.867 | 99.00th=[ 199], 99.50th=[ 209], 99.90th=[ 271], 99.95th=[ 279], 00:26:10.867 | 99.99th=[ 288] 00:26:10.867 bw ( KiB/s): min=109274, max=358400, per=10.38%, avg=181918.85, stdev=68518.50, samples=20 00:26:10.867 iops : min= 426, max= 1400, avg=710.40, stdev=267.66, samples=20 00:26:10.867 lat (msec) : 4=0.04%, 10=0.27%, 20=1.19%, 50=26.11%, 100=28.40% 00:26:10.867 lat (msec) : 250=43.80%, 500=0.20% 00:26:10.867 cpu : usr=2.23%, sys=2.47%, ctx=2469, majf=0, minf=1 00:26:10.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:26:10.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.867 issued rwts: total=0,7166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.867 job3: (groupid=0, jobs=1): err= 0: pid=363984: Thu Jul 25 13:54:07 2024 00:26:10.867 write: IOPS=574, BW=144MiB/s (151MB/s)(1444MiB/10059msec); 0 zone resets 00:26:10.867 slat (usec): min=24, max=109897, avg=1485.40, stdev=3880.08 00:26:10.867 clat (msec): min=2, max=303, avg=109.89, stdev=57.23 00:26:10.867 lat (msec): min=3, max=303, avg=111.38, stdev=58.11 00:26:10.867 clat percentiles (msec): 00:26:10.867 | 1.00th=[ 10], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 64], 00:26:10.867 | 30.00th=[ 87], 40.00th=[ 100], 50.00th=[ 105], 60.00th=[ 113], 00:26:10.867 | 70.00th=[ 129], 80.00th=[ 155], 90.00th=[ 203], 95.00th=[ 213], 00:26:10.867 | 99.00th=[ 232], 99.50th=[ 241], 99.90th=[ 292], 99.95th=[ 292], 00:26:10.867 | 99.99th=[ 305] 00:26:10.867 bw ( KiB/s): min=69632, max=302685, per=8.35%, avg=146341.80, stdev=60244.21, samples=20 00:26:10.867 iops : min= 272, max= 1182, avg=571.50, stdev=235.30, samples=20 00:26:10.867 lat (msec) : 4=0.09%, 10=0.93%, 20=4.83%, 50=11.74%, 100=23.25% 00:26:10.867 lat (msec) : 250=58.73%, 500=0.43% 00:26:10.867 cpu : usr=1.39%, sys=2.01%, ctx=2595, majf=0, minf=1 00:26:10.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:10.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.867 issued rwts: total=0,5776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.867 job4: (groupid=0, jobs=1): err= 0: pid=363985: Thu Jul 25 13:54:07 2024 00:26:10.867 write: IOPS=523, BW=131MiB/s (137MB/s)(1320MiB/10094msec); 0 zone resets 00:26:10.867 slat (usec): min=24, max=32523, avg=1628.46, stdev=3469.24 00:26:10.867 clat (msec): min=4, max=244, avg=120.67, stdev=49.53 00:26:10.867 lat (msec): min=4, max=244, avg=122.30, stdev=50.22 00:26:10.867 clat percentiles (msec): 00:26:10.867 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 52], 20.00th=[ 72], 00:26:10.867 | 30.00th=[ 102], 40.00th=[ 113], 50.00th=[ 125], 60.00th=[ 132], 00:26:10.867 | 70.00th=[ 148], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 197], 00:26:10.867 | 99.00th=[ 226], 99.50th=[ 230], 99.90th=[ 234], 99.95th=[ 243], 00:26:10.867 | 99.99th=[ 245] 00:26:10.867 bw ( KiB/s): min=75776, max=263168, per=7.62%, avg=133612.15, stdev=44135.78, samples=20 00:26:10.867 iops : min= 296, max= 1028, avg=521.80, stdev=172.41, samples=20 00:26:10.867 lat (msec) : 10=0.17%, 20=2.23%, 50=7.39%, 100=19.55%, 250=70.66% 00:26:10.867 cpu : usr=1.24%, sys=1.85%, ctx=2176, majf=0, minf=1 00:26:10.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:10.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.867 issued rwts: total=0,5280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.867 job5: (groupid=0, jobs=1): err= 0: pid=363986: Thu Jul 25 13:54:07 2024 00:26:10.867 write: IOPS=582, BW=146MiB/s (153MB/s)(1471MiB/10094msec); 0 zone resets 00:26:10.867 slat (usec): min=21, max=44483, avg=1413.78, stdev=3093.43 00:26:10.867 clat (msec): min=3, max=244, avg=108.34, stdev=42.01 00:26:10.867 lat (msec): min=3, max=244, avg=109.75, 
stdev=42.55 00:26:10.867 clat percentiles (msec): 00:26:10.867 | 1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 65], 20.00th=[ 72], 00:26:10.867 | 30.00th=[ 92], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 114], 00:26:10.867 | 70.00th=[ 123], 80.00th=[ 133], 90.00th=[ 161], 95.00th=[ 192], 00:26:10.867 | 99.00th=[ 226], 99.50th=[ 232], 99.90th=[ 245], 99.95th=[ 245], 00:26:10.867 | 99.99th=[ 245] 00:26:10.867 bw ( KiB/s): min=78848, max=206848, per=8.50%, avg=149091.05, stdev=33425.13, samples=20 00:26:10.867 iops : min= 308, max= 808, avg=582.25, stdev=130.63, samples=20 00:26:10.867 lat (msec) : 4=0.03%, 10=0.56%, 20=1.56%, 50=5.10%, 100=29.45% 00:26:10.867 lat (msec) : 250=63.29% 00:26:10.867 cpu : usr=1.66%, sys=2.20%, ctx=2469, majf=0, minf=1 00:26:10.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:10.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.868 issued rwts: total=0,5884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.868 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.868 job6: (groupid=0, jobs=1): err= 0: pid=363987: Thu Jul 25 13:54:07 2024 00:26:10.868 write: IOPS=813, BW=203MiB/s (213MB/s)(2062MiB/10143msec); 0 zone resets 00:26:10.868 slat (usec): min=26, max=27789, avg=1006.53, stdev=2341.45 00:26:10.868 clat (msec): min=2, max=305, avg=77.66, stdev=41.85 00:26:10.868 lat (msec): min=3, max=305, avg=78.66, stdev=42.42 00:26:10.868 clat percentiles (msec): 00:26:10.868 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 47], 00:26:10.868 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 68], 60.00th=[ 79], 00:26:10.868 | 70.00th=[ 102], 80.00th=[ 117], 90.00th=[ 134], 95.00th=[ 157], 00:26:10.868 | 99.00th=[ 182], 99.50th=[ 194], 99.90th=[ 288], 99.95th=[ 296], 00:26:10.868 | 99.99th=[ 305] 00:26:10.868 bw ( KiB/s): min=100553, max=363520, per=11.95%, avg=209601.80, stdev=83402.24, samples=20 00:26:10.868 iops : min= 392, max= 1420, avg=818.60, stdev=325.90, samples=20 00:26:10.868 lat (msec) : 4=0.07%, 10=0.68%, 20=2.33%, 50=38.00%, 100=28.04% 00:26:10.868 lat (msec) : 250=30.61%, 500=0.27% 00:26:10.868 cpu : usr=2.13%, sys=2.63%, ctx=3576, majf=0, minf=1 00:26:10.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:10.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.868 issued rwts: total=0,8249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.868 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.868 job7: (groupid=0, jobs=1): err= 0: pid=363988: Thu Jul 25 13:54:07 2024 00:26:10.868 write: IOPS=516, BW=129MiB/s (135MB/s)(1309MiB/10136msec); 0 zone resets 00:26:10.868 slat (usec): min=25, max=73277, avg=1579.92, stdev=3816.35 00:26:10.868 clat (msec): min=3, max=290, avg=122.25, stdev=54.33 00:26:10.868 lat (msec): min=4, max=290, avg=123.83, stdev=55.10 00:26:10.868 clat percentiles (msec): 00:26:10.868 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 62], 20.00th=[ 78], 00:26:10.868 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 111], 60.00th=[ 131], 00:26:10.868 | 70.00th=[ 148], 80.00th=[ 171], 90.00th=[ 205], 95.00th=[ 220], 00:26:10.868 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 279], 99.95th=[ 279], 00:26:10.868 | 99.99th=[ 292] 00:26:10.868 bw ( KiB/s): min=74240, max=246252, per=7.56%, avg=132494.20, stdev=49961.00, samples=20 00:26:10.868 iops : min= 290, 
max= 961, avg=517.30, stdev=195.11, samples=20 00:26:10.868 lat (msec) : 4=0.02%, 10=0.42%, 20=2.12%, 50=5.50%, 100=31.47% 00:26:10.868 lat (msec) : 250=60.18%, 500=0.29% 00:26:10.868 cpu : usr=1.11%, sys=1.83%, ctx=2274, majf=0, minf=1 00:26:10.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:10.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.868 issued rwts: total=0,5236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.868 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.868 job8: (groupid=0, jobs=1): err= 0: pid=363990: Thu Jul 25 13:54:07 2024 00:26:10.868 write: IOPS=547, BW=137MiB/s (144MB/s)(1383MiB/10093msec); 0 zone resets 00:26:10.868 slat (usec): min=20, max=30320, avg=1624.26, stdev=3300.08 00:26:10.868 clat (msec): min=2, max=223, avg=115.15, stdev=41.99 00:26:10.868 lat (msec): min=4, max=226, avg=116.77, stdev=42.52 00:26:10.868 clat percentiles (msec): 00:26:10.868 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 71], 20.00th=[ 84], 00:26:10.868 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 110], 60.00th=[ 125], 00:26:10.868 | 70.00th=[ 134], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 182], 00:26:10.868 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 220], 99.95th=[ 222], 00:26:10.868 | 99.99th=[ 224] 00:26:10.868 bw ( KiB/s): min=90112, max=234538, per=7.99%, avg=140017.05, stdev=40535.73, samples=20 00:26:10.868 iops : min= 352, max= 916, avg=546.85, stdev=158.33, samples=20 00:26:10.868 lat (msec) : 4=0.04%, 10=0.61%, 20=0.89%, 50=4.88%, 100=37.41% 00:26:10.868 lat (msec) : 250=56.17% 00:26:10.868 cpu : usr=1.68%, sys=2.12%, ctx=2052, majf=0, minf=1 00:26:10.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:10.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.868 issued rwts: total=0,5530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.868 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.868 job9: (groupid=0, jobs=1): err= 0: pid=363991: Thu Jul 25 13:54:07 2024 00:26:10.868 write: IOPS=715, BW=179MiB/s (188MB/s)(1814MiB/10141msec); 0 zone resets 00:26:10.868 slat (usec): min=19, max=116970, avg=1128.64, stdev=3413.14 00:26:10.868 clat (msec): min=3, max=298, avg=88.22, stdev=46.15 00:26:10.868 lat (msec): min=3, max=298, avg=89.35, stdev=46.70 00:26:10.868 clat percentiles (msec): 00:26:10.868 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 57], 00:26:10.868 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 91], 00:26:10.868 | 70.00th=[ 107], 80.00th=[ 131], 90.00th=[ 153], 95.00th=[ 165], 00:26:10.868 | 99.00th=[ 226], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 288], 00:26:10.868 | 99.99th=[ 300] 00:26:10.868 bw ( KiB/s): min=95744, max=340480, per=10.51%, avg=184214.80, stdev=69504.63, samples=20 00:26:10.868 iops : min= 374, max= 1330, avg=719.40, stdev=271.64, samples=20 00:26:10.868 lat (msec) : 4=0.04%, 10=1.21%, 20=3.27%, 50=11.26%, 100=50.12% 00:26:10.868 lat (msec) : 250=33.83%, 500=0.28% 00:26:10.868 cpu : usr=1.46%, sys=2.19%, ctx=3174, majf=0, minf=1 00:26:10.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:26:10.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.868 issued rwts: 
total=0,7257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.868 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.868 job10: (groupid=0, jobs=1): err= 0: pid=363992: Thu Jul 25 13:54:07 2024 00:26:10.868 write: IOPS=675, BW=169MiB/s (177MB/s)(1700MiB/10065msec); 0 zone resets 00:26:10.868 slat (usec): min=21, max=41608, avg=1329.06, stdev=2974.27 00:26:10.868 clat (msec): min=2, max=250, avg=93.36, stdev=50.62 00:26:10.868 lat (msec): min=3, max=250, avg=94.69, stdev=51.26 00:26:10.868 clat percentiles (msec): 00:26:10.868 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 50], 00:26:10.868 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 100], 00:26:10.868 | 70.00th=[ 110], 80.00th=[ 130], 90.00th=[ 178], 95.00th=[ 194], 00:26:10.868 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 251], 00:26:10.868 | 99.99th=[ 251] 00:26:10.868 bw ( KiB/s): min=75264, max=286268, per=9.84%, avg=172584.90, stdev=70700.51, samples=20 00:26:10.868 iops : min= 294, max= 1118, avg=674.05, stdev=276.09, samples=20 00:26:10.868 lat (msec) : 4=0.06%, 10=0.99%, 20=2.23%, 50=17.62%, 100=39.54% 00:26:10.868 lat (msec) : 250=39.52%, 500=0.04% 00:26:10.868 cpu : usr=1.77%, sys=2.35%, ctx=2417, majf=0, minf=1 00:26:10.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:10.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.868 issued rwts: total=0,6801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.868 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.868 00:26:10.868 Run status group 0 (all jobs): 00:26:10.868 WRITE: bw=1712MiB/s (1795MB/s), 129MiB/s-203MiB/s (135MB/s-213MB/s), io=17.0GiB (18.2GB), run=10037-10143msec 00:26:10.868 00:26:10.868 Disk stats (read/write): 00:26:10.868 nvme0n1: ios=49/13465, merge=0/0, ticks=355/1229307, in_queue=1229662, util=96.75% 00:26:10.868 nvme10n1: ios=49/10898, merge=0/0, ticks=58/1224292, in_queue=1224350, util=96.19% 00:26:10.868 nvme1n1: ios=28/14218, merge=0/0, ticks=51/1219540, in_queue=1219591, util=96.53% 00:26:10.868 nvme2n1: ios=48/11446, merge=0/0, ticks=3242/1222118, in_queue=1225360, util=100.00% 00:26:10.868 nvme3n1: ios=0/10445, merge=0/0, ticks=0/1223025, in_queue=1223025, util=96.81% 00:26:10.868 nvme4n1: ios=21/11653, merge=0/0, ticks=70/1224297, in_queue=1224367, util=97.77% 00:26:10.868 nvme5n1: ios=0/16376, merge=0/0, ticks=0/1221299, in_queue=1221299, util=97.64% 00:26:10.868 nvme6n1: ios=0/10358, merge=0/0, ticks=0/1221939, in_queue=1221939, util=97.84% 00:26:10.868 nvme7n1: ios=43/10946, merge=0/0, ticks=214/1221011, in_queue=1221225, util=100.00% 00:26:10.868 nvme8n1: ios=49/14398, merge=0/0, ticks=4339/1193890, in_queue=1198229, util=100.00% 00:26:10.868 nvme9n1: ios=0/13496, merge=0/0, ticks=0/1223968, in_queue=1223968, util=99.07% 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:10.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:10.868 13:54:07 
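The entries from sync onward are the per-subsystem teardown, multiconnection.sh@37-40 looping over all eleven subsystems (the xtrace for cnode1 begins above, and the same pattern repeats through cnode11 below). Reconstructed from the trace, the loop is essentially:

  for i in $(seq 1 $NVMF_SUBSYS); do                    # NVMF_SUBSYS=11 in this run
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"  # drop the initiator session
      waitforserial_disconnect "SPDK$i"                 # wait for the block device to vanish
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"  # remove the target side
  done

waitforserial_disconnect itself (common/autotest_common.sh@1219-1231 in the trace) polls lsblk until no block device reports the given serial. A sketch consistent with the commands logged, with an assumed retry cap since the loop bound is not visible here:

  waitforserial_disconnect() {
      local serial=$1 i=0
      while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
          ((++i > 15)) && break                            # assumed cap, not shown in the log
          sleep 1
      done
      ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"     # final check, as at @1227
  }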
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.868 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:11.127 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.127 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:11.386 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:11.386 13:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:11.386 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:11.386 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:11.386 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:11.645 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.645 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:11.957 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:11.957 13:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.957 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:12.235 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:12.235 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:12.235 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:12.235 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:12.235 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:12.235 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:12.235 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.236 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:12.495 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:12.495 13:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:12.495 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:12.495 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:12.755 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:12.755 13:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:12.755 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:13.014 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:13.014 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:13.015 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:13.015 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.015 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.015 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.015 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.015 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:13.275 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:13.275 
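With cnode11 disconnected, what remains below is the last waitforserial_disconnect/nvmf_delete_subsystem pair and then the common teardown: the fio state file is removed (multiconnection.sh@43), nvmftestfini unloads the NVMe-over-TCP kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages), and killprocess stops the SPDK target, pid 355432. Condensed from the xtrace that follows; the modprobe loop's exit condition is not visible in the log and is assumed here:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # also drags out nvme_fabrics and nvme_keyring
  done
  modprobe -v -r nvme-fabrics
  set -e
  pid=355432
  if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  fi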
13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:13.275 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:13.275 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:13.275 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:13.275 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:13.275 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.275 rmmod nvme_tcp 00:26:13.275 rmmod nvme_fabrics 00:26:13.275 rmmod nvme_keyring 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 355432 ']' 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 355432 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 355432 ']' 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 355432 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 355432 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 355432' 00:26:13.275 killing process with pid 355432 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 355432 00:26:13.275 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 355432 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.844 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.751 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:15.751 00:26:15.751 real 1m14.404s 00:26:15.751 user 4m26.588s 00:26:15.751 sys 0m27.794s 00:26:15.751 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:15.751 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.751 ************************************ 00:26:15.751 END TEST nvmf_multiconnection 00:26:15.751 ************************************ 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:16.010 ************************************ 00:26:16.010 START TEST nvmf_initiator_timeout 00:26:16.010 ************************************ 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:16.010 * Looking for test storage... 
00:26:16.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.010 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.011 13:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.011 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.582 13:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.582 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:22.583 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.583 13:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:22.583 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:22.583 Found net devices under 0000:af:00.0: cvl_0_0 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.583 13:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:22.583 Found net devices under 0000:af:00.1: cvl_0_1 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
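For reference, the nvmf_tcp_init sequence traced above builds a loopback NVMe/TCP topology on a single host: the target-side port is moved into its own network namespace so the initiator (root namespace, cvl_0_1, 10.0.0.1) and the target (cvl_0_0_ns_spdk namespace, cvl_0_0, 10.0.0.2) exchange traffic over the two physical ports, presumably cabled back-to-back on this rig. A minimal sketch of the same setup, with interface names and addresses taken from this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on the initiator side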
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:26:22.583 00:26:22.583 --- 10.0.0.2 ping statistics --- 00:26:22.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.583 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:26:22.583 00:26:22.583 --- 10.0.0.1 ping statistics --- 00:26:22.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.583 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.583 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=370071 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 370071 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 370071 ']' 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
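Once both pings above succeed, the host-side nvme-tcp module is loaded and nvmfappstart launches the SPDK target inside the target namespace. A sketch of those steps as they appear in the trace (the workspace path is shortened here, and the backgrounding plus the waitforlisten poll on /var/tmp/spdk.sock are implied by the surrounding helpers rather than shown verbatim):

  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
  modprobe nvme-tcp                                   # initiator-side transport driver
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                          # waitforlisten then polls /var/tmp/spdk.sock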
00:26:22.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.584 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.843 [2024-07-25 13:54:19.477318] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:22.843 [2024-07-25 13:54:19.477370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.843 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.843 [2024-07-25 13:54:19.520283] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:22.843 [2024-07-25 13:54:19.550837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.843 [2024-07-25 13:54:19.590378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.843 [2024-07-25 13:54:19.590418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.843 [2024-07-25 13:54:19.590427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.843 [2024-07-25 13:54:19.590436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.843 [2024-07-25 13:54:19.590443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.843 [2024-07-25 13:54:19.590489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.843 [2024-07-25 13:54:19.590588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.843 [2024-07-25 13:54:19.590607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.843 [2024-07-25 13:54:19.590612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:26:23.781 Malloc0 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 Delay0 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.781 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 [2024-07-25 13:54:20.394570] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 [2024-07-25 13:54:20.422829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.782 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:25.162 13:54:21 
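Condensing the rpc_cmd calls above: the whole target build-out for this test is a 64 MiB malloc disk wrapped in a delay bdev (30 us on every latency knob to start), exported over TCP and connected from the host. A sketch of the same sequence driven through scripts/rpc.py (an assumed entry point; rpc_cmd in the trace issues the same RPCs over /var/tmp/spdk.sock):

  rpc=scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # avg/p99 read+write latencies, usec
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # trace also passes --hostnqn/--hostid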
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:25.162 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:25.162 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.162 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:25.162 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=370722 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:27.066 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:27.066 [global] 00:26:27.066 thread=1 00:26:27.066 invalidate=1 00:26:27.066 rw=write 00:26:27.066 time_based=1 00:26:27.066 runtime=60 00:26:27.066 ioengine=libaio 00:26:27.066 direct=1 00:26:27.066 bs=4096 00:26:27.066 iodepth=1 00:26:27.066 norandommap=0 00:26:27.066 numjobs=1 00:26:27.066 00:26:27.066 verify_dump=1 00:26:27.066 verify_backlog=512 00:26:27.066 verify_state_save=0 00:26:27.066 do_verify=1 00:26:27.066 verify=crc32c-intel 00:26:27.066 [job0] 00:26:27.066 filename=/dev/nvme0n1 00:26:27.066 Could not set queue depth (nvme0n1) 00:26:27.325 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:27.325 fio-3.35 00:26:27.325 Starting 1 thread 00:26:29.860 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:29.860 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.860 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.119 true 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
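The waitforserial call at the top of this block is a bounded poll: sleep 2 s per attempt, up to 16 attempts, until lsblk reports a block device whose serial matches the subsystem serial (SPDKISFASTANDAWESOME). A minimal reimplementation of the loop visible in the trace (the in-tree helper additionally compares against an expected device count):

  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          # one matching NAME,SERIAL row means the namespace has attached
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
              return 0
          fi
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME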
common/autotest_common.sh@10 -- # set +x 00:26:30.119 true 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.119 true 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.119 true 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.119 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.453 true 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.453 true 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.453 true 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.453 true 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
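This block is the heart of the initiator-timeout test. With fio mid-run, every latency knob on Delay0 is raised from 30 us to roughly 31 s (31000000 us; note the trace passes 310000000 for p99_write), held for three seconds, then dropped back to 30 us. The 31 s figure presumably sits just past the initiator's default 30 s NVMe I/O timeout — an assumption about this host's nvme_core settings — so in-flight I/O stalls long enough to exercise the host's timeout path before the device recovers. A sketch of the toggle:

  for knob in avg_read avg_write p99_read; do
      $rpc bdev_delay_update_latency Delay0 $knob 31000000   # ~31 s, past the I/O timeout
  done
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000  # as passed in the trace
  sleep 3
  for knob in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 $knob 30         # restore 30 us
  done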
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:33.453 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 370722 00:27:29.690 00:27:29.690 job0: (groupid=0, jobs=1): err= 0: pid=370982: Thu Jul 25 13:55:24 2024 00:27:29.690 read: IOPS=131, BW=527KiB/s (540kB/s)(30.9MiB/60011msec) 00:27:29.690 slat (usec): min=8, max=10319, avg=12.63, stdev=156.28 00:27:29.690 clat (usec): min=418, max=41610k, avg=7267.98, stdev=467917.50 00:27:29.690 lat (usec): min=428, max=41610k, avg=7280.61, stdev=467917.70 00:27:29.690 clat percentiles (usec): 00:27:29.690 | 1.00th=[ 474], 5.00th=[ 510], 10.00th=[ 519], 00:27:29.690 | 20.00th=[ 529], 30.00th=[ 529], 40.00th=[ 537], 00:27:29.690 | 50.00th=[ 537], 60.00th=[ 545], 70.00th=[ 545], 00:27:29.690 | 80.00th=[ 553], 90.00th=[ 562], 95.00th=[ 603], 00:27:29.690 | 99.00th=[ 41157], 99.50th=[ 41681], 99.90th=[ 42206], 00:27:29.690 | 99.95th=[ 42730], 99.99th=[17112761] 00:27:29.690 write: IOPS=136, BW=546KiB/s (559kB/s)(32.0MiB/60011msec); 0 zone resets 00:27:29.690 slat (usec): min=11, max=28910, avg=16.63, stdev=319.28 00:27:29.690 clat (usec): min=226, max=505, avg=273.82, stdev=25.69 00:27:29.690 lat (usec): min=239, max=29338, avg=290.46, stdev=322.02 00:27:29.690 clat percentiles (usec): 00:27:29.690 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:27:29.690 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:27:29.690 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 306], 00:27:29.690 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 420], 99.95th=[ 433], 00:27:29.690 | 99.99th=[ 506] 00:27:29.690 bw ( KiB/s): min= 1688, max= 6128, per=100.00%, avg=4369.07, stdev=1122.74, samples=15 00:27:29.690 iops : min= 422, max= 1532, avg=1092.27, stdev=280.68, samples=15 00:27:29.690 lat (usec) : 250=3.24%, 500=49.58%, 750=45.39% 00:27:29.690 lat (msec) : 2=0.01%, 50=1.78%, >=2000=0.01% 00:27:29.690 cpu : usr=0.28%, sys=0.46%, ctx=16105, majf=0, minf=2 00:27:29.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.690 issued rwts: total=7909,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:29.690 00:27:29.690 Run status group 0 (all jobs): 00:27:29.690 READ: bw=527KiB/s (540kB/s), 527KiB/s-527KiB/s (540kB/s-540kB/s), io=30.9MiB (32.4MB), run=60011-60011msec 00:27:29.690 WRITE: bw=546KiB/s (559kB/s), 546KiB/s-546KiB/s (559kB/s-559kB/s), io=32.0MiB (33.6MB), run=60011-60011msec 00:27:29.690 00:27:29.690 Disk stats (read/write): 00:27:29.690 nvme0n1: ios=7958/8192, merge=0/0, ticks=16984/2150, in_queue=19134, util=99.95% 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:29.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.690 13:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:29.690 nvmf hotplug test: fio successful as expected 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.690 rmmod nvme_tcp 00:27:29.690 rmmod nvme_fabrics 00:27:29.690 rmmod nvme_keyring 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:29.690 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 370071 ']' 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 370071 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 370071 ']' 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 370071 00:27:29.691 
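Teardown mirrors setup: disconnect the host, delete the subsystem, unload the host modules (the bare rmmod lines above are the modprobe -r output), then kill the target and dismantle the namespace. A sketch, with the namespace removal marked as an assumption since _remove_spdk_ns runs with xtrace disabled:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics           # trace shows nvme_keyring being dropped as well
  kill "$nvmfpid" && wait "$nvmfpid"          # killprocess also verifies the process name first
  ip netns delete cvl_0_0_ns_spdk             # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1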
13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 370071 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 370071' 00:27:29.691 killing process with pid 370071 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 370071 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 370071 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.691 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.950 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.950 00:27:29.950 real 1m14.106s 00:27:29.950 user 4m27.631s 00:27:29.950 sys 0m8.797s 00:27:29.950 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:29.950 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:29.950 ************************************ 00:27:29.950 END TEST nvmf_initiator_timeout 00:27:29.950 ************************************ 00:27:30.209 13:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:27:30.209 13:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:27:30.209 13:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:27:30.209 13:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.209 13:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.787 13:55:33 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:36.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.787 13:55:33 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:36.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.787 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:36.788 Found net devices under 0000:af:00.0: cvl_0_0 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:36.788 Found net devices under 0000:af:00.1: cvl_0_1 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:36.788 ************************************ 00:27:36.788 START TEST nvmf_perf_adq 00:27:36.788 ************************************ 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:36.788 * Looking for test storage... 00:27:36.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.788 13:55:33 
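Sourcing nvmf/common.sh at the top of perf_adq.sh generates a fresh host identity per run: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the hostid visible in the trace reuses the UUID portion. The exact derivation is not shown, so the parameter expansion below is an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: strip through the last ':' to leave the bare UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")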
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.788 13:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.788 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.361 
13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:43.361 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:43.361 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.361 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 
-- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:43.362 Found net devices under 0000:af:00.0: cvl_0_0 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:43.362 Found net devices under 0000:af:00.1: cvl_0_1 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:43.362 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:44.336 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:46.272 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # 
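Before any ADQ configuration, the adq_reload_driver step a few lines up cycles the E810's ice driver, presumably so the NIC returns to a clean queue/filter state left over from earlier tests, after which nvmftestinit re-probes the ports. The reload itself is just:

  rmmod ice
  modprobe ice
  sleep 5      # let the cvl_0_* netdevs reappear before re-probing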
xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.545 13:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:51.545 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.545 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:51.545 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:51.546 Found net devices under 0000:af:00.0: cvl_0_0 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:51.546 Found net devices under 0000:af:00.1: cvl_0_1 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.546 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:27:51.804 00:27:51.804 --- 10.0.0.2 ping statistics --- 00:27:51.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.804 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:27:51.804 00:27:51.804 --- 10.0.0.1 ping statistics --- 00:27:51.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.804 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.804 13:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=389005 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 389005 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 389005 ']' 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.804 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.804 [2024-07-25 13:55:48.571424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:51.804 [2024-07-25 13:55:48.571480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.804 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.804 [2024-07-25 13:55:48.616122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:51.804 [2024-07-25 13:55:48.651691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.063 [2024-07-25 13:55:48.692590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.063 [2024-07-25 13:55:48.692632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.063 [2024-07-25 13:55:48.692641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.063 [2024-07-25 13:55:48.692650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.063 [2024-07-25 13:55:48.692657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
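Note on the setup traced above: nvmftestinit detected the two physical E810 ports and, because this is a phy run, built a point-to-point topology on a single host by moving the target port into a private network namespace. A condensed sketch of that plumbing, using the interface names and addresses discovered in this run (these are the same commands shown in the trace, gathered here for readability):

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

nvmf_tgt is then started inside the namespace via ip netns exec, which is why the reactor notices that follow come from the namespaced process while spdk_nvme_perf later connects from the host side.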
00:27:52.063 [2024-07-25 13:55:48.692706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.063 [2024-07-25 13:55:48.692825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.063 [2024-07-25 13:55:48.692848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.063 [2024-07-25 13:55:48.692850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.630 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.630 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:52.630 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.630 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.630 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.630 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.631 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.889 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.890 [2024-07-25 13:55:49.577480] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.890 Malloc1 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.890 [2024-07-25 13:55:49.631884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=389175 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:52.890 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:52.890 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:54.795 "tick_rate": 2500000000, 00:27:54.795 "poll_groups": [ 00:27:54.795 { 00:27:54.795 "name": "nvmf_tgt_poll_group_000", 00:27:54.795 "admin_qpairs": 1, 00:27:54.795 "io_qpairs": 1, 00:27:54.795 "current_admin_qpairs": 1, 00:27:54.795 
"current_io_qpairs": 1, 00:27:54.795 "pending_bdev_io": 0, 00:27:54.795 "completed_nvme_io": 21244, 00:27:54.795 "transports": [ 00:27:54.795 { 00:27:54.795 "trtype": "TCP" 00:27:54.795 } 00:27:54.795 ] 00:27:54.795 }, 00:27:54.795 { 00:27:54.795 "name": "nvmf_tgt_poll_group_001", 00:27:54.795 "admin_qpairs": 0, 00:27:54.795 "io_qpairs": 1, 00:27:54.795 "current_admin_qpairs": 0, 00:27:54.795 "current_io_qpairs": 1, 00:27:54.795 "pending_bdev_io": 0, 00:27:54.795 "completed_nvme_io": 20896, 00:27:54.795 "transports": [ 00:27:54.795 { 00:27:54.795 "trtype": "TCP" 00:27:54.795 } 00:27:54.795 ] 00:27:54.795 }, 00:27:54.795 { 00:27:54.795 "name": "nvmf_tgt_poll_group_002", 00:27:54.795 "admin_qpairs": 0, 00:27:54.795 "io_qpairs": 1, 00:27:54.795 "current_admin_qpairs": 0, 00:27:54.795 "current_io_qpairs": 1, 00:27:54.795 "pending_bdev_io": 0, 00:27:54.795 "completed_nvme_io": 21148, 00:27:54.795 "transports": [ 00:27:54.795 { 00:27:54.795 "trtype": "TCP" 00:27:54.795 } 00:27:54.795 ] 00:27:54.795 }, 00:27:54.795 { 00:27:54.795 "name": "nvmf_tgt_poll_group_003", 00:27:54.795 "admin_qpairs": 0, 00:27:54.795 "io_qpairs": 1, 00:27:54.795 "current_admin_qpairs": 0, 00:27:54.795 "current_io_qpairs": 1, 00:27:54.795 "pending_bdev_io": 0, 00:27:54.795 "completed_nvme_io": 21143, 00:27:54.795 "transports": [ 00:27:54.795 { 00:27:54.795 "trtype": "TCP" 00:27:54.795 } 00:27:54.795 ] 00:27:54.795 } 00:27:54.795 ] 00:27:54.795 }' 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:54.795 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:55.054 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:55.054 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:55.054 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 389175 00:28:03.177 Initializing NVMe Controllers 00:28:03.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:03.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:03.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:03.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:03.177 Initialization complete. Launching workers. 
00:28:03.177 ========================================================
00:28:03.177 Latency(us)
00:28:03.177 Device Information : IOPS MiB/s Average min max
00:28:03.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11195.30 43.73 5717.15 2976.46 8865.22
00:28:03.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11073.30 43.26 5779.85 2508.54 10828.06
00:28:03.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11223.10 43.84 5702.37 1821.31 10549.35
00:28:03.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11341.50 44.30 5642.97 2132.01 10375.92
00:28:03.177 ========================================================
00:28:03.177 Total : 44833.20 175.13 5710.17 1821.31 10828.06
00:28:03.177
00:28:03.177 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:03.177 rmmod nvme_tcp
00:28:03.177 rmmod nvme_fabrics
00:28:03.177 rmmod nvme_keyring
00:28:03.177 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 389005 ']'
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 389005
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 389005 ']'
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 389005
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 389005
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 389005'
killing process with pid 389005
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 389005
13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 389005
00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:03.437 13:56:00
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.437 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.344 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:05.344 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:05.344 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:06.722 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:09.323 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.597 13:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:14.597 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:14.597 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.597 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:14.597 Found net devices under 0000:af:00.0: cvl_0_0 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.598 13:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:14.598 Found net devices under 0000:af:00.1: cvl_0_1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:14.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:28:14.598 00:28:14.598 --- 10.0.0.2 ping statistics --- 00:28:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.598 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:28:14.598 00:28:14.598 --- 10.0.0.1 ping statistics --- 00:28:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.598 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:14.598 net.core.busy_poll = 1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:14.598 net.core.busy_read = 1 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:14.598 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:14.598 13:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=393092 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 393092 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 393092 ']' 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:14.598 [2024-07-25 13:56:11.225391] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:28:14.598 [2024-07-25 13:56:11.225453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.598 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.598 [2024-07-25 13:56:11.269525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:14.598 [2024-07-25 13:56:11.303939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.598 [2024-07-25 13:56:11.345862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.598 [2024-07-25 13:56:11.345900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.598 [2024-07-25 13:56:11.345910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.598 [2024-07-25 13:56:11.345919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.598 [2024-07-25 13:56:11.345926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
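Note on the second pass: unlike the baseline run, adq_configure_driver above switched the ice port into ADQ mode before the target started. A condensed sketch of those steps (the commands are the ones in the trace; the comments describe their intent and are not part of the script output):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on   # enable hardware traffic classes
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1          # busy-poll sockets rather than sleep on interrupts
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (TCP dst port 4420 toward 10.0.0.2) into TC1, hardware-only (skip_sw)
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The SPDK-side counterpart below is sock_impl_set_options --enable-placement-id 1 plus nvmf_create_transport ... --sock-priority 1, where the baseline run used placement-id 0 and sock-priority 0; the nvmf_get_stats check that follows then verifies the ADQ run lands all four I/O qpairs on a single poll group instead of spreading one per group.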
00:28:14.598 [2024-07-25 13:56:11.345970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.598 [2024-07-25 13:56:11.345994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.598 [2024-07-25 13:56:11.346077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.598 [2024-07-25 13:56:11.346079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.166 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.166 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:15.166 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.166 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.167 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.167 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.426 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 [2024-07-25 13:56:12.190173] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 Malloc1 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.427 [2024-07-25 13:56:12.236885] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=393269 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:15.427 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:15.427 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:17.963 "tick_rate": 2500000000, 00:28:17.963 "poll_groups": [ 00:28:17.963 { 00:28:17.963 "name": "nvmf_tgt_poll_group_000", 00:28:17.963 "admin_qpairs": 1, 00:28:17.963 "io_qpairs": 0, 00:28:17.963 "current_admin_qpairs": 1, 00:28:17.963 
"current_io_qpairs": 0, 00:28:17.963 "pending_bdev_io": 0, 00:28:17.963 "completed_nvme_io": 0, 00:28:17.963 "transports": [ 00:28:17.963 { 00:28:17.963 "trtype": "TCP" 00:28:17.963 } 00:28:17.963 ] 00:28:17.963 }, 00:28:17.963 { 00:28:17.963 "name": "nvmf_tgt_poll_group_001", 00:28:17.963 "admin_qpairs": 0, 00:28:17.963 "io_qpairs": 4, 00:28:17.963 "current_admin_qpairs": 0, 00:28:17.963 "current_io_qpairs": 4, 00:28:17.963 "pending_bdev_io": 0, 00:28:17.963 "completed_nvme_io": 45940, 00:28:17.963 "transports": [ 00:28:17.963 { 00:28:17.963 "trtype": "TCP" 00:28:17.963 } 00:28:17.963 ] 00:28:17.963 }, 00:28:17.963 { 00:28:17.963 "name": "nvmf_tgt_poll_group_002", 00:28:17.963 "admin_qpairs": 0, 00:28:17.963 "io_qpairs": 0, 00:28:17.963 "current_admin_qpairs": 0, 00:28:17.963 "current_io_qpairs": 0, 00:28:17.963 "pending_bdev_io": 0, 00:28:17.963 "completed_nvme_io": 0, 00:28:17.963 "transports": [ 00:28:17.963 { 00:28:17.963 "trtype": "TCP" 00:28:17.963 } 00:28:17.963 ] 00:28:17.963 }, 00:28:17.963 { 00:28:17.963 "name": "nvmf_tgt_poll_group_003", 00:28:17.963 "admin_qpairs": 0, 00:28:17.963 "io_qpairs": 0, 00:28:17.963 "current_admin_qpairs": 0, 00:28:17.963 "current_io_qpairs": 0, 00:28:17.963 "pending_bdev_io": 0, 00:28:17.963 "completed_nvme_io": 0, 00:28:17.963 "transports": [ 00:28:17.963 { 00:28:17.963 "trtype": "TCP" 00:28:17.963 } 00:28:17.963 ] 00:28:17.963 } 00:28:17.963 ] 00:28:17.963 }' 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:28:17.963 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 393269 00:28:26.087 Initializing NVMe Controllers 00:28:26.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:26.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:26.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:26.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:26.087 Initialization complete. Launching workers. 
00:28:26.087 ========================================================
00:28:26.087 Latency(us)
00:28:26.087 Device Information : IOPS MiB/s Average min max
00:28:26.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5865.70 22.91 10933.43 1431.00 56276.49
00:28:26.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6054.00 23.65 10572.79 1548.44 54185.70
00:28:26.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6482.00 25.32 9874.35 1395.76 56983.50
00:28:26.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6075.00 23.73 10567.95 1513.68 55732.88
00:28:26.087 ========================================================
00:28:26.087 Total : 24476.69 95.61 10473.05 1395.76 56983.50
00:28:26.087
00:28:26.087 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 393092 ']' 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 393092 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 393092 ']' 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 393092 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393092 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393092' killing process with pid 393092 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 393092 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 393092 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 13:56:22
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.088 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:26.088 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.088 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.088 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.088 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.088 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.994 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.994 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:27.994 00:28:27.994 real 0m51.662s 00:28:27.994 user 2m45.963s 00:28:27.994 sys 0m14.469s 00:28:27.994 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:27.994 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.994 ************************************ 00:28:27.994 END TEST nvmf_perf_adq 00:28:27.994 ************************************ 00:28:28.253 13:56:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:28.253 13:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:28.253 13:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.253 13:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:28.253 ************************************ 00:28:28.253 START TEST nvmf_shutdown 00:28:28.253 ************************************ 00:28:28.253 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:28.253 * Looking for test storage... 
00:28:28.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.253 13:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.253 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:28.254 13:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.254 ************************************ 00:28:28.254 START TEST nvmf_shutdown_tc1 00:28:28.254 ************************************ 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:28.254 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.825 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.826 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.826 13:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.826 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.826 Found net devices under 0000:af:00.0: cvl_0_0 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.826 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.826 13:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:28:34.826 00:28:34.826 --- 10.0.0.2 ping statistics --- 00:28:34.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.826 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:34.826 00:28:34.826 --- 10.0.0.1 ping statistics --- 00:28:34.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.826 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=398628 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 398628 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 398628 ']' 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.826 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.827 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.827 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.827 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.827 [2024-07-25 13:56:31.637004] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:28:34.827 [2024-07-25 13:56:31.637057] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.827 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.827 [2024-07-25 13:56:31.678105] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:34.827 [2024-07-25 13:56:31.711163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.119 [2024-07-25 13:56:31.750692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.119 [2024-07-25 13:56:31.750736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.119 [2024-07-25 13:56:31.750746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.119 [2024-07-25 13:56:31.750755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.119 [2024-07-25 13:56:31.750762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
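Behind the trace above, nvmftestinit built the whole test bed out of a handful of iproute2 commands: the detected target port cvl_0_0 is isolated in the namespace cvl_0_0_ns_spdk, each side gets a 10.0.0.x address, a firewall rule admits NVMe/TCP on port 4420, and two pings prove the back-to-back link before nvmf_tgt is launched inside the namespace with ip netns exec. Collected in one place, the sequence looks like the sketch below; the cvl_* device names and the addresses are specific to this rig.

#!/usr/bin/env bash
# Sketch of the namespace plumbing nvmf_tcp_init performed above.
# Assumes two NIC ports wired back to back on one host.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on the initiator port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Prove the link in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# From here on every target-side command is wrapped the way the trace shows:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E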
00:28:35.119 [2024-07-25 13:56:31.750864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.119 [2024-07-25 13:56:31.750949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.119 [2024-07-25 13:56:31.751056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.119 [2024-07-25 13:56:31.751057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.718 [2024-07-25 13:56:32.500219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:35.718 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.719 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 Malloc1 00:28:35.978 [2024-07-25 13:56:32.615138] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.978 Malloc2 00:28:35.978 Malloc3 00:28:35.978 Malloc4 00:28:35.978 Malloc5 00:28:35.978 Malloc6 00:28:35.978 Malloc7 00:28:36.238 Malloc8 00:28:36.238 Malloc9 00:28:36.238 Malloc10 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=398938 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 398938 /var/tmp/bdevperf.sock 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 398938 ']' 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.238 13:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.238 { 00:28:36.238 "params": { 00:28:36.238 "name": "Nvme$subsystem", 00:28:36.238 "trtype": "$TEST_TRANSPORT", 00:28:36.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.238 "adrfam": "ipv4", 00:28:36.238 "trsvcid": "$NVMF_PORT", 00:28:36.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.238 "hdgst": ${hdgst:-false}, 00:28:36.238 "ddgst": ${ddgst:-false} 00:28:36.238 }, 00:28:36.238 "method": "bdev_nvme_attach_controller" 00:28:36.238 } 00:28:36.238 EOF 00:28:36.238 )") 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.238 { 00:28:36.238 "params": { 00:28:36.238 "name": "Nvme$subsystem", 00:28:36.238 "trtype": "$TEST_TRANSPORT", 00:28:36.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.238 "adrfam": "ipv4", 00:28:36.238 "trsvcid": "$NVMF_PORT", 00:28:36.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.238 "hdgst": ${hdgst:-false}, 00:28:36.238 "ddgst": ${ddgst:-false} 00:28:36.238 }, 00:28:36.238 "method": "bdev_nvme_attach_controller" 00:28:36.238 } 00:28:36.238 EOF 00:28:36.238 )") 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.238 { 00:28:36.238 "params": { 00:28:36.238 "name": 
"Nvme$subsystem", 00:28:36.238 "trtype": "$TEST_TRANSPORT", 00:28:36.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.238 "adrfam": "ipv4", 00:28:36.238 "trsvcid": "$NVMF_PORT", 00:28:36.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.238 "hdgst": ${hdgst:-false}, 00:28:36.238 "ddgst": ${ddgst:-false} 00:28:36.238 }, 00:28:36.238 "method": "bdev_nvme_attach_controller" 00:28:36.238 } 00:28:36.238 EOF 00:28:36.238 )") 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.238 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.238 { 00:28:36.239 "params": { 00:28:36.239 "name": "Nvme$subsystem", 00:28:36.239 "trtype": "$TEST_TRANSPORT", 00:28:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.239 "adrfam": "ipv4", 00:28:36.239 "trsvcid": "$NVMF_PORT", 00:28:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.239 "hdgst": ${hdgst:-false}, 00:28:36.239 "ddgst": ${ddgst:-false} 00:28:36.239 }, 00:28:36.239 "method": "bdev_nvme_attach_controller" 00:28:36.239 } 00:28:36.239 EOF 00:28:36.239 )") 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.239 { 00:28:36.239 "params": { 00:28:36.239 "name": "Nvme$subsystem", 00:28:36.239 "trtype": "$TEST_TRANSPORT", 00:28:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.239 "adrfam": "ipv4", 00:28:36.239 "trsvcid": "$NVMF_PORT", 00:28:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.239 "hdgst": ${hdgst:-false}, 00:28:36.239 "ddgst": ${ddgst:-false} 00:28:36.239 }, 00:28:36.239 "method": "bdev_nvme_attach_controller" 00:28:36.239 } 00:28:36.239 EOF 00:28:36.239 )") 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.239 { 00:28:36.239 "params": { 00:28:36.239 "name": "Nvme$subsystem", 00:28:36.239 "trtype": "$TEST_TRANSPORT", 00:28:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.239 "adrfam": "ipv4", 00:28:36.239 "trsvcid": "$NVMF_PORT", 00:28:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.239 "hdgst": ${hdgst:-false}, 00:28:36.239 "ddgst": ${ddgst:-false} 00:28:36.239 }, 00:28:36.239 "method": "bdev_nvme_attach_controller" 00:28:36.239 } 00:28:36.239 EOF 00:28:36.239 )") 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.239 [2024-07-25 13:56:33.103838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:28:36.239 [2024-07-25 13:56:33.103889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.239 { 00:28:36.239 "params": { 00:28:36.239 "name": "Nvme$subsystem", 00:28:36.239 "trtype": "$TEST_TRANSPORT", 00:28:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.239 "adrfam": "ipv4", 00:28:36.239 "trsvcid": "$NVMF_PORT", 00:28:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.239 "hdgst": ${hdgst:-false}, 00:28:36.239 "ddgst": ${ddgst:-false} 00:28:36.239 }, 00:28:36.239 "method": "bdev_nvme_attach_controller" 00:28:36.239 } 00:28:36.239 EOF 00:28:36.239 )") 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.239 { 00:28:36.239 "params": { 00:28:36.239 "name": "Nvme$subsystem", 00:28:36.239 "trtype": "$TEST_TRANSPORT", 00:28:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.239 "adrfam": "ipv4", 00:28:36.239 "trsvcid": "$NVMF_PORT", 00:28:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.239 "hdgst": ${hdgst:-false}, 00:28:36.239 "ddgst": ${ddgst:-false} 00:28:36.239 }, 00:28:36.239 "method": "bdev_nvme_attach_controller" 00:28:36.239 } 00:28:36.239 EOF 00:28:36.239 )") 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.239 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.498 { 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme$subsystem", 00:28:36.498 "trtype": "$TEST_TRANSPORT", 00:28:36.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "$NVMF_PORT", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.498 "hdgst": ${hdgst:-false}, 00:28:36.498 "ddgst": ${ddgst:-false} 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 } 00:28:36.498 EOF 00:28:36.498 )") 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.498 { 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme$subsystem", 00:28:36.498 "trtype": "$TEST_TRANSPORT", 00:28:36.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.498 "adrfam": "ipv4", 
00:28:36.498 "trsvcid": "$NVMF_PORT", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.498 "hdgst": ${hdgst:-false}, 00:28:36.498 "ddgst": ${ddgst:-false} 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 } 00:28:36.498 EOF 00:28:36.498 )") 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.498 [2024-07-25 13:56:33.141483] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:36.498 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme1", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.498 "hdgst": false, 00:28:36.498 "ddgst": false 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 },{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme2", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:36.498 "hdgst": false, 00:28:36.498 "ddgst": false 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 },{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme3", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:36.498 "hdgst": false, 00:28:36.498 "ddgst": false 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 },{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme4", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:36.498 "hdgst": false, 00:28:36.498 "ddgst": false 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 },{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme5", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:36.498 "hdgst": false, 00:28:36.498 "ddgst": false 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 },{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme6", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:36.498 "hdgst": false, 
00:28:36.498 "ddgst": false 00:28:36.498 }, 00:28:36.498 "method": "bdev_nvme_attach_controller" 00:28:36.498 },{ 00:28:36.498 "params": { 00:28:36.498 "name": "Nvme7", 00:28:36.498 "trtype": "tcp", 00:28:36.498 "traddr": "10.0.0.2", 00:28:36.498 "adrfam": "ipv4", 00:28:36.498 "trsvcid": "4420", 00:28:36.498 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:36.498 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:36.498 "hdgst": false, 00:28:36.499 "ddgst": false 00:28:36.499 }, 00:28:36.499 "method": "bdev_nvme_attach_controller" 00:28:36.499 },{ 00:28:36.499 "params": { 00:28:36.499 "name": "Nvme8", 00:28:36.499 "trtype": "tcp", 00:28:36.499 "traddr": "10.0.0.2", 00:28:36.499 "adrfam": "ipv4", 00:28:36.499 "trsvcid": "4420", 00:28:36.499 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:36.499 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:36.499 "hdgst": false, 00:28:36.499 "ddgst": false 00:28:36.499 }, 00:28:36.499 "method": "bdev_nvme_attach_controller" 00:28:36.499 },{ 00:28:36.499 "params": { 00:28:36.499 "name": "Nvme9", 00:28:36.499 "trtype": "tcp", 00:28:36.499 "traddr": "10.0.0.2", 00:28:36.499 "adrfam": "ipv4", 00:28:36.499 "trsvcid": "4420", 00:28:36.499 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:36.499 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:36.499 "hdgst": false, 00:28:36.499 "ddgst": false 00:28:36.499 }, 00:28:36.499 "method": "bdev_nvme_attach_controller" 00:28:36.499 },{ 00:28:36.499 "params": { 00:28:36.499 "name": "Nvme10", 00:28:36.499 "trtype": "tcp", 00:28:36.499 "traddr": "10.0.0.2", 00:28:36.499 "adrfam": "ipv4", 00:28:36.499 "trsvcid": "4420", 00:28:36.499 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:36.499 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:36.499 "hdgst": false, 00:28:36.499 "ddgst": false 00:28:36.499 }, 00:28:36.499 "method": "bdev_nvme_attach_controller" 00:28:36.499 }' 00:28:36.499 [2024-07-25 13:56:33.176903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.499 [2024-07-25 13:56:33.214825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 398938 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:37.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 398938 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:37.874 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:38.811 13:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 398628 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.811 { 00:28:38.811 "params": { 00:28:38.811 "name": "Nvme$subsystem", 00:28:38.811 "trtype": "$TEST_TRANSPORT", 00:28:38.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.811 "adrfam": "ipv4", 00:28:38.811 "trsvcid": "$NVMF_PORT", 00:28:38.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.811 "hdgst": ${hdgst:-false}, 00:28:38.811 "ddgst": ${ddgst:-false} 00:28:38.811 }, 00:28:38.811 "method": "bdev_nvme_attach_controller" 00:28:38.811 } 00:28:38.811 EOF 00:28:38.811 )") 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.811 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.811 { 00:28:38.811 "params": { 00:28:38.811 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 [2024-07-25 13:56:35.586812] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:28:38.812 [2024-07-25 13:56:35.586871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399346 ] 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.812 { 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme$subsystem", 00:28:38.812 "trtype": "$TEST_TRANSPORT", 00:28:38.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.812 
"adrfam": "ipv4", 00:28:38.812 "trsvcid": "$NVMF_PORT", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.812 "hdgst": ${hdgst:-false}, 00:28:38.812 "ddgst": ${ddgst:-false} 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 } 00:28:38.812 EOF 00:28:38.812 )") 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.812 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:38.812 [2024-07-25 13:56:35.626378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:38.812 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme1", 00:28:38.812 "trtype": "tcp", 00:28:38.812 "traddr": "10.0.0.2", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "4420", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.812 "hdgst": false, 00:28:38.812 "ddgst": false 00:28:38.812 }, 00:28:38.812 "method": "bdev_nvme_attach_controller" 00:28:38.812 },{ 00:28:38.812 "params": { 00:28:38.812 "name": "Nvme2", 00:28:38.812 "trtype": "tcp", 00:28:38.812 "traddr": "10.0.0.2", 00:28:38.812 "adrfam": "ipv4", 00:28:38.812 "trsvcid": "4420", 00:28:38.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:38.812 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:38.812 "hdgst": false, 00:28:38.812 "ddgst": false 00:28:38.812 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme3", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme4", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme5", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme6", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:38.813 
"hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme7", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme8", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme9", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 },{ 00:28:38.813 "params": { 00:28:38.813 "name": "Nvme10", 00:28:38.813 "trtype": "tcp", 00:28:38.813 "traddr": "10.0.0.2", 00:28:38.813 "adrfam": "ipv4", 00:28:38.813 "trsvcid": "4420", 00:28:38.813 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:38.813 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:38.813 "hdgst": false, 00:28:38.813 "ddgst": false 00:28:38.813 }, 00:28:38.813 "method": "bdev_nvme_attach_controller" 00:28:38.813 }' 00:28:38.813 [2024-07-25 13:56:35.662282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.073 [2024-07-25 13:56:35.701000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.452 Running I/O for 1 seconds... 
00:28:41.390
00:28:41.390 Latency(us)
00:28:41.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.390 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme1n1 : 1.03 249.63 15.60 0.00 0.00 253945.65 18559.80 208876.34
00:28:41.390 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme2n1 : 1.04 245.55 15.35 0.00 0.00 249834.91 18559.80 239914.19
00:28:41.390 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme3n1 : 1.13 284.25 17.77 0.00 0.00 217022.30 18245.22 223136.97
00:28:41.390 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme4n1 : 1.05 304.71 19.04 0.00 0.00 199152.76 16672.36 201326.59
00:28:41.390 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme5n1 : 1.15 335.22 20.95 0.00 0.00 179308.00 16462.64 203843.17
00:28:41.390 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme6n1 : 1.12 285.45 17.84 0.00 0.00 207416.69 16462.64 203004.31
00:28:41.390 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme7n1 : 1.15 333.44 20.84 0.00 0.00 175330.92 16986.93 204682.04
00:28:41.390 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme8n1 : 1.16 331.28 20.71 0.00 0.00 174190.32 13631.49 223136.97
00:28:41.390 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme9n1 : 1.13 282.74 17.67 0.00 0.00 200241.48 16567.50 212231.78
00:28:41.390 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.390 Verification LBA range: start 0x0 length 0x400
00:28:41.390 Nvme10n1 : 1.15 287.85 17.99 0.00 0.00 193330.72 3643.80 229847.86
00:28:41.390 ===================================================================================================================
00:28:41.390 Total : 2940.13 183.76 0.00 0.00 201424.56 3643.80 239914.19
00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:41.650 13:56:38
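One sanity check on the Latency table above: with a fixed 64 KiB IO size, throughput is determined by IOPS alone, MiB/s = IOPS x 65536 / 1048576 = IOPS / 16. For Nvme1n1 that gives 249.63 / 16 = 15.60 MiB/s, and for the Total row 2940.13 / 16 = 183.76 MiB/s, both matching the reported column.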
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.650 rmmod nvme_tcp 00:28:41.650 rmmod nvme_fabrics 00:28:41.650 rmmod nvme_keyring 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 398628 ']' 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 398628 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 398628 ']' 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 398628 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 398628 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 398628' 00:28:41.650 killing process with pid 398628 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 398628 00:28:41.650 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 398628 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
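The killprocess helper traced above follows a common shutdown idiom; a minimal sketch (the sudo guard mirrors the ps check in the log, the remaining error handling is omitted):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 only probes: it fails if the PID no longer exists.
    kill -0 "$pid" 2>/dev/null || return 0
    # Re-read the command name so a recycled PID is not killed blindly.
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null   # reaps the process; only works for our own children
}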
00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.220 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.128 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:44.128 00:28:44.128 real 0m15.854s 00:28:44.128 user 0m33.561s 00:28:44.128 sys 0m6.665s 00:28:44.128 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:44.128 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.128 ************************************ 00:28:44.128 END TEST nvmf_shutdown_tc1 00:28:44.128 ************************************ 00:28:44.128 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:44.129 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:44.129 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.129 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.389 ************************************ 00:28:44.389 START TEST nvmf_shutdown_tc2 00:28:44.389 ************************************ 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.389 13:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.389 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.390 13:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:44.390 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:44.390 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.390 13:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:44.390 Found net devices under 0000:af:00.0: cvl_0_0 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:44.390 Found net devices under 0000:af:00.1: cvl_0_1 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.390 13:56:41 
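The device-discovery dance above reduces to: look up supported NICs by PCI vendor:device ID, then map each PCI address to its kernel netdev through sysfs. A sketch of just that core, assuming a pci_bus_cache associative array ("vendor:device" -> space-separated PCI addresses) was populated from lspci earlier, as nvmf/common.sh does:

intel=0x8086
# E810-family device IDs probed in the log; 0x159b is the one matched here.
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
net_devs=()
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done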
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:44.390 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:44.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:28:44.651 00:28:44.651 --- 10.0.0.2 ping statistics --- 00:28:44.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.651 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:44.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:44.651 00:28:44.651 --- 10.0.0.1 ping statistics --- 00:28:44.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.651 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=400396 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 400396 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 400396 ']' 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
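The nvmf_tcp_init sequence just above builds the whole test topology on one host: the two looped-back ports of the NIC are split across network namespaces so target and initiator get separate stacks, then reachability is proven with one ping in each direction. Condensed from the ip/iptables commands logged above:

# Target side lives in the namespace (10.0.0.2), initiator in the root ns (10.0.0.1).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator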
00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.651 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.651 [2024-07-25 13:56:41.472153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:28:44.651 [2024-07-25 13:56:41.472204] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.651 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.651 [2024-07-25 13:56:41.513484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:44.910 [2024-07-25 13:56:41.548490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.911 [2024-07-25 13:56:41.588787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.911 [2024-07-25 13:56:41.588826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.911 [2024-07-25 13:56:41.588836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.911 [2024-07-25 13:56:41.588845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.911 [2024-07-25 13:56:41.588852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.911 [2024-07-25 13:56:41.588955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.911 [2024-07-25 13:56:41.589053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.911 [2024-07-25 13:56:41.589164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.911 [2024-07-25 13:56:41.589165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.480 [2024-07-25 13:56:42.330042] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.480 13:56:42 
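Two patterns worth pulling out of the startup above: the harness backgrounds nvmf_tgt inside the target namespace and blocks until its RPC socket answers, and only then creates the TCP transport. A sketch, where $rootdir stands for the SPDK checkout and the polling loop is an assumption about what waitforlisten does (rpc_get_methods and nvmf_create_transport are standard SPDK RPCs):

ip netns exec cvl_0_0_ns_spdk \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll until the app's RPC socket is up instead of sleeping blindly.
for ((i = 0; i < 100; i++)); do
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
"$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192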
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.480 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:45.740 
13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.740 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.740 Malloc1 00:28:45.740 [2024-07-25 13:56:42.444807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.740 Malloc2 00:28:45.740 Malloc3 00:28:45.740 Malloc4 00:28:45.740 Malloc5 00:28:45.998 Malloc6 00:28:45.998 Malloc7 00:28:45.998 Malloc8 00:28:45.998 Malloc9 00:28:45.998 Malloc10 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=400714 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 400714 /var/tmp/bdevperf.sock 00:28:45.998 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 400714 ']' 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:45.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
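The quiet stretch above (the repeated "# cat" into rpcs.txt and the Malloc1..Malloc10 lines) is where the subsystems are created: per subsystem, a malloc bdev backs an NVMe-oF subsystem listening on the target IP. A sketch of the equivalent direct RPC calls (the RPC names are standard SPDK rpc.py commands; the 64 MiB / 512 B malloc geometry and SPDK$i serial numbers are illustrative assumptions):

for i in {1..10}; do
    rpc.py bdev_malloc_create 64 512 -b Malloc$i
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done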
00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.999 { 00:28:45.999 "params": { 00:28:45.999 "name": "Nvme$subsystem", 00:28:45.999 "trtype": "$TEST_TRANSPORT", 00:28:45.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.999 "adrfam": "ipv4", 00:28:45.999 "trsvcid": "$NVMF_PORT", 00:28:45.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.999 "hdgst": ${hdgst:-false}, 00:28:45.999 "ddgst": ${ddgst:-false} 00:28:45.999 }, 00:28:45.999 "method": "bdev_nvme_attach_controller" 00:28:45.999 } 00:28:45.999 EOF 00:28:45.999 )") 00:28:45.999 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.259 { 00:28:46.259 "params": { 00:28:46.259 "name": "Nvme$subsystem", 00:28:46.259 "trtype": "$TEST_TRANSPORT", 00:28:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.259 "adrfam": "ipv4", 00:28:46.259 "trsvcid": "$NVMF_PORT", 00:28:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.259 "hdgst": ${hdgst:-false}, 00:28:46.259 "ddgst": ${ddgst:-false} 00:28:46.259 }, 00:28:46.259 "method": "bdev_nvme_attach_controller" 00:28:46.259 } 00:28:46.259 EOF 00:28:46.259 )") 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.259 { 00:28:46.259 "params": { 00:28:46.259 "name": "Nvme$subsystem", 00:28:46.259 "trtype": "$TEST_TRANSPORT", 00:28:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.259 "adrfam": "ipv4", 00:28:46.259 "trsvcid": "$NVMF_PORT", 00:28:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.259 "hdgst": ${hdgst:-false}, 00:28:46.259 "ddgst": ${ddgst:-false} 00:28:46.259 }, 00:28:46.259 "method": "bdev_nvme_attach_controller" 00:28:46.259 } 00:28:46.259 EOF 00:28:46.259 )") 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.259 { 00:28:46.259 "params": { 00:28:46.259 "name": "Nvme$subsystem", 00:28:46.259 "trtype": "$TEST_TRANSPORT", 00:28:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.259 "adrfam": "ipv4", 00:28:46.259 "trsvcid": "$NVMF_PORT", 00:28:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.259 "hdgst": ${hdgst:-false}, 00:28:46.259 "ddgst": ${ddgst:-false} 00:28:46.259 }, 00:28:46.259 "method": "bdev_nvme_attach_controller" 00:28:46.259 } 00:28:46.259 EOF 00:28:46.259 )") 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.259 { 00:28:46.259 "params": { 00:28:46.259 "name": "Nvme$subsystem", 00:28:46.259 "trtype": "$TEST_TRANSPORT", 00:28:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.259 "adrfam": "ipv4", 00:28:46.259 "trsvcid": "$NVMF_PORT", 00:28:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.259 "hdgst": ${hdgst:-false}, 00:28:46.259 "ddgst": ${ddgst:-false} 00:28:46.259 }, 00:28:46.259 "method": "bdev_nvme_attach_controller" 00:28:46.259 } 00:28:46.259 EOF 00:28:46.259 )") 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.259 { 00:28:46.259 "params": { 00:28:46.259 "name": "Nvme$subsystem", 00:28:46.259 "trtype": "$TEST_TRANSPORT", 00:28:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.259 "adrfam": "ipv4", 00:28:46.259 "trsvcid": "$NVMF_PORT", 00:28:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.259 "hdgst": ${hdgst:-false}, 00:28:46.259 "ddgst": ${ddgst:-false} 00:28:46.259 }, 00:28:46.259 "method": "bdev_nvme_attach_controller" 00:28:46.259 } 00:28:46.259 EOF 00:28:46.259 )") 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.259 [2024-07-25 13:56:42.925986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
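Worth noting while reading the command line traced at shutdown.sh@102 above: the --json /dev/fd/63 argument is bash process substitution, so the JSON that gen_nvmf_target_json is assembling fragment by fragment here reaches bdevperf through a file descriptor rather than a temp file. Schematically:

# <(...) expands to /dev/fd/N and bdevperf reads the generated config from it.
bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10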
00:28:46.259 [2024-07-25 13:56:42.926040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400714 ] 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.259 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.259 { 00:28:46.259 "params": { 00:28:46.259 "name": "Nvme$subsystem", 00:28:46.259 "trtype": "$TEST_TRANSPORT", 00:28:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.259 "adrfam": "ipv4", 00:28:46.259 "trsvcid": "$NVMF_PORT", 00:28:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.259 "hdgst": ${hdgst:-false}, 00:28:46.259 "ddgst": ${ddgst:-false} 00:28:46.259 }, 00:28:46.259 "method": "bdev_nvme_attach_controller" 00:28:46.259 } 00:28:46.259 EOF 00:28:46.259 )") 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.260 { 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme$subsystem", 00:28:46.260 "trtype": "$TEST_TRANSPORT", 00:28:46.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "$NVMF_PORT", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.260 "hdgst": ${hdgst:-false}, 00:28:46.260 "ddgst": ${ddgst:-false} 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 } 00:28:46.260 EOF 00:28:46.260 )") 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.260 { 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme$subsystem", 00:28:46.260 "trtype": "$TEST_TRANSPORT", 00:28:46.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "$NVMF_PORT", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.260 "hdgst": ${hdgst:-false}, 00:28:46.260 "ddgst": ${ddgst:-false} 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 } 00:28:46.260 EOF 00:28:46.260 )") 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.260 { 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme$subsystem", 00:28:46.260 "trtype": "$TEST_TRANSPORT", 00:28:46.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.260 
"adrfam": "ipv4", 00:28:46.260 "trsvcid": "$NVMF_PORT", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.260 "hdgst": ${hdgst:-false}, 00:28:46.260 "ddgst": ${ddgst:-false} 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 } 00:28:46.260 EOF 00:28:46.260 )") 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:46.260 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:28:46.260 [2024-07-25 13:56:42.964972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:46.260 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme1", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme2", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme3", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme4", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme5", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme6", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:46.260 
"hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme7", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme8", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme9", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 },{ 00:28:46.260 "params": { 00:28:46.260 "name": "Nvme10", 00:28:46.260 "trtype": "tcp", 00:28:46.260 "traddr": "10.0.0.2", 00:28:46.260 "adrfam": "ipv4", 00:28:46.260 "trsvcid": "4420", 00:28:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:46.260 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:46.260 "hdgst": false, 00:28:46.260 "ddgst": false 00:28:46.260 }, 00:28:46.260 "method": "bdev_nvme_attach_controller" 00:28:46.260 }' 00:28:46.260 [2024-07-25 13:56:43.000431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.260 [2024-07-25 13:56:43.038557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.638 Running I/O for 10 seconds... 
00:28:47.638 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.638 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:47.638 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:47.638 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.638 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:47.897 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.157 13:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:48.157 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 400714 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 400714 ']' 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 400714 00:28:48.416 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:48.675 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:48.675 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400714 00:28:48.675 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:48.675 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:48.675 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400714' 00:28:48.675 killing process with pid 400714 00:28:48.675 13:56:45 
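The read_io_count=3 / 67 / 131 progression above is shutdown.sh's waitforio loop, reassembled here from the traced lines for readability (argument checks elided; rpc_py stands in for the autotest rpc_cmd wrapper around scripts/rpc.py):

rpc_py=${rpc_py:-scripts/rpc.py}
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    # Up to 10 probes, 0.25 s apart; succeed once 100 reads have completed.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$("$rpc_py" -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1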
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 400714
13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 400714
00:28:48.675 Received shutdown signal, test time was about 0.934480 seconds
00:28:48.675
00:28:48.675 Latency(us)
00:28:48.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.675 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.675 Verification LBA range: start 0x0 length 0x400
00:28:48.675 Nvme1n1 : 0.93 274.14 17.13 0.00 0.00 231214.28 17196.65 221459.25
00:28:48.676 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme2n1 : 0.90 284.07 17.75 0.00 0.00 219174.09 18979.23 188743.68
00:28:48.676 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme3n1 : 0.93 343.79 21.49 0.00 0.00 178141.72 16986.93 203843.17
00:28:48.676 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme4n1 : 0.89 287.12 17.95 0.00 0.00 209210.98 16567.50 201326.59
00:28:48.676 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme5n1 : 0.91 282.50 17.66 0.00 0.00 209080.93 16986.93 207198.62
00:28:48.676 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme6n1 : 0.92 278.53 17.41 0.00 0.00 208808.96 18350.08 206359.76
00:28:48.676 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme7n1 : 0.91 280.19 17.51 0.00 0.00 203570.59 20656.95 192099.12
00:28:48.676 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme8n1 : 0.91 281.59 17.60 0.00 0.00 198735.46 32925.29 193776.84
00:28:48.676 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme9n1 : 0.92 277.35 17.33 0.00 0.00 198618.93 19188.94 205520.90
00:28:48.676 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:48.676 Verification LBA range: start 0x0 length 0x400
00:28:48.676 Nvme10n1 : 0.93 275.14 17.20 0.00 0.00 196772.04 19398.66 233203.30
00:28:48.676 ===================================================================================================================
00:28:48.676 Total : 2864.42 179.03 0.00 0.00 204679.72 16567.50 233203.30
00:28:48.934 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 400396
13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:49.872 rmmod nvme_tcp 00:28:49.872 rmmod nvme_fabrics 00:28:49.872 rmmod nvme_keyring 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 400396 ']' 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 400396 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 400396 ']' 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 400396 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400396 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400396' 00:28:49.872 killing process with pid 400396 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 400396 00:28:49.872 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 400396 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.495 
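A quick sanity check on the bdevperf result table above: with 64 KiB I/Os, throughput should be IOPS × 65536 bytes, and it is, e.g. Nvme1n1's 274.14 IOPS × 64 KiB ≈ 17.13 MiB/s, matching its MiB/s column; likewise the Total row's 2864.42 IOPS is exactly the sum of the ten per-device IOPS figures.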
13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.495 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:52.402 00:28:52.402 real 0m8.164s 00:28:52.402 user 0m24.516s 00:28:52.402 sys 0m1.710s 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.402 ************************************ 00:28:52.402 END TEST nvmf_shutdown_tc2 00:28:52.402 ************************************ 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:52.402 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:52.662 ************************************ 00:28:52.662 START TEST nvmf_shutdown_tc3 00:28:52.662 ************************************ 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.662 
13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:52.662 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:52.662 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.662 13:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:52.662 Found net devices under 0000:af:00.0: cvl_0_0 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.662 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:52.663 Found net devices under 0000:af:00.1: cvl_0_1 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.663 13:56:49 
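The device probing traced above (nvmf/common.sh@340 through @401) walks each supported PCI ID and keeps the kernel net interfaces that sysfs exposes for the matched functions, here the two E810 ports 0000:af:00.0 and 0000:af:00.1. Reassembled as a sketch, assuming pci_devs already holds the matched addresses; the operstate read is an assumption, since the trace only shows the already-expanded [[ up == up ]] test:

# For each candidate PCI function, keep its net interfaces that are up.
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        [[ $(<"$net_dev/operstate") == up ]] || continue   # assumed source of "up"
        net_devs+=("${net_dev##*/}")                       # strip the sysfs path
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
done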
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.663 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:52.922 00:28:52.922 --- 10.0.0.2 ping statistics --- 00:28:52.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.922 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:28:52.922 00:28:52.922 --- 10.0.0.1 ping statistics --- 00:28:52.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.922 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=401897 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 401897 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 401897 ']' 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
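The nvmf_tcp_init sequence traced above is easiest to read as one unit: the target-side port is moved into a private network namespace and the two ports are given back-to-back addresses, so the NVMe/TCP traffic in this test crosses the real link between them. Collected from the trace into a runnable recap (root required; interface names are this host's):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator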
00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:52.922 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.922 [2024-07-25 13:56:49.720021] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:28:52.922 [2024-07-25 13:56:49.720068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.922 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.922 [2024-07-25 13:56:49.761335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:52.922 [2024-07-25 13:56:49.795983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.182 [2024-07-25 13:56:49.835696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.182 [2024-07-25 13:56:49.835740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.182 [2024-07-25 13:56:49.835749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.182 [2024-07-25 13:56:49.835757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.182 [2024-07-25 13:56:49.835780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.182 [2024-07-25 13:56:49.835878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.182 [2024-07-25 13:56:49.835973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.182 [2024-07-25 13:56:49.835996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.182 [2024-07-25 13:56:49.835998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.182 [2024-07-25 13:56:49.983169] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.182 13:56:49 
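The reactor placement above follows directly from the core mask: -m 0x1E is binary 11110, i.e. cores 1 through 4, which is why the EAL reports "Total cores available: 4" and starts one reactor on each of cores 1-4 while core 0 stays free; the bdevperf initiator later runs with its own -c 0x1 mask, so the target and initiator reactors land on disjoint cores.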
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.182 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:53.182 
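The create_subsystems step above (shutdown.sh@26 through @35) never echoes the RPC lines it writes: each loop pass cats a block into rpcs.txt, and the bare rpc_cmd at the end replays the accumulated file, which is what produces the Malloc1 through Malloc10 output that follows. A sketch of that shape; the specific RPC invocations below are illustrative standard SPDK RPCs whose arguments are assumptions, since the trace does not echo the heredoc contents:

rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do     # num_subsystems=({1..10}) per the trace
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 128 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # assumed stdin replay; redirections don't appear in xtrace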
13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.182 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.441 Malloc1 00:28:53.441 [2024-07-25 13:56:50.097818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.441 Malloc2 00:28:53.441 Malloc3 00:28:53.441 Malloc4 00:28:53.441 Malloc5 00:28:53.441 Malloc6 00:28:53.441 Malloc7 00:28:53.701 Malloc8 00:28:53.701 Malloc9 00:28:53.701 Malloc10 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=402199 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 402199 /var/tmp/bdevperf.sock 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 402199 ']' 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:53.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
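One way to read the bdevperf flags repeated here: -q 64 with -o 65536 allows up to 64 × 64 KiB = 4 MiB of I/O in flight per attached bdev, so across the ten Nvme bdevs the verify workload can keep as much as 40 MiB outstanding against the target at once.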
00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.701 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:53.702 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.702 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:53.702 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.702 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.702 { 00:28:53.702 "params": { 00:28:53.702 "name": "Nvme$subsystem", 00:28:53.702 "trtype": "$TEST_TRANSPORT", 00:28:53.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.702 "adrfam": "ipv4", 00:28:53.702 "trsvcid": "$NVMF_PORT", 00:28:53.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.702 "hdgst": ${hdgst:-false}, 00:28:53.702 "ddgst": ${ddgst:-false} 00:28:53.702 }, 00:28:53.702 "method": "bdev_nvme_attach_controller" 00:28:53.702 } 00:28:53.702 EOF 00:28:53.702 )") 00:28:53.702 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[... the identical for/config+=/cat trace repeats for the remaining subsystems (timestamps 00:28:53.702 through 00:28:53.963), with the application init messages interleaved mid-loop ...]
[2024-07-25 13:56:50.579590] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:28:53.702 [2024-07-25 13:56:50.579646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402199 ] 
00:28:53.962 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.962 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.962 { 00:28:53.962 "params": { 00:28:53.962 "name": "Nvme$subsystem", 00:28:53.962 "trtype": "$TEST_TRANSPORT", 00:28:53.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.962 
"adrfam": "ipv4", 00:28:53.962 "trsvcid": "$NVMF_PORT", 00:28:53.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.962 "hdgst": ${hdgst:-false}, 00:28:53.962 "ddgst": ${ddgst:-false} 00:28:53.962 }, 00:28:53.962 "method": "bdev_nvme_attach_controller" 00:28:53.962 } 00:28:53.962 EOF 00:28:53.962 )") 00:28:53.962 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:53.962 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.962 [2024-07-25 13:56:50.617061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:53.962 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:53.962 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:53.962 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:53.962 "params": { 00:28:53.962 "name": "Nvme1", 00:28:53.962 "trtype": "tcp", 00:28:53.962 "traddr": "10.0.0.2", 00:28:53.962 "adrfam": "ipv4", 00:28:53.962 "trsvcid": "4420", 00:28:53.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.962 "hdgst": false, 00:28:53.962 "ddgst": false 00:28:53.962 }, 00:28:53.962 "method": "bdev_nvme_attach_controller" 00:28:53.962 },{ 00:28:53.962 "params": { 00:28:53.962 "name": "Nvme2", 00:28:53.962 "trtype": "tcp", 00:28:53.962 "traddr": "10.0.0.2", 00:28:53.962 "adrfam": "ipv4", 00:28:53.962 "trsvcid": "4420", 00:28:53.962 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.962 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:53.962 "hdgst": false, 00:28:53.962 "ddgst": false 00:28:53.962 }, 00:28:53.962 "method": "bdev_nvme_attach_controller" 00:28:53.962 },{ 00:28:53.962 "params": { 00:28:53.962 "name": "Nvme3", 00:28:53.962 "trtype": "tcp", 00:28:53.962 "traddr": "10.0.0.2", 00:28:53.962 "adrfam": "ipv4", 00:28:53.962 "trsvcid": "4420", 00:28:53.962 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:53.962 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:53.962 "hdgst": false, 00:28:53.962 "ddgst": false 00:28:53.962 }, 00:28:53.962 "method": "bdev_nvme_attach_controller" 00:28:53.962 },{ 00:28:53.962 "params": { 00:28:53.962 "name": "Nvme4", 00:28:53.962 "trtype": "tcp", 00:28:53.962 "traddr": "10.0.0.2", 00:28:53.962 "adrfam": "ipv4", 00:28:53.962 "trsvcid": "4420", 00:28:53.962 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:53.962 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:53.962 "hdgst": false, 00:28:53.962 "ddgst": false 00:28:53.962 }, 00:28:53.962 "method": "bdev_nvme_attach_controller" 00:28:53.962 },{ 00:28:53.962 "params": { 00:28:53.962 "name": "Nvme5", 00:28:53.962 "trtype": "tcp", 00:28:53.962 "traddr": "10.0.0.2", 00:28:53.962 "adrfam": "ipv4", 00:28:53.963 "trsvcid": "4420", 00:28:53.963 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:53.963 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:53.963 "hdgst": false, 00:28:53.963 "ddgst": false 00:28:53.963 }, 00:28:53.963 "method": "bdev_nvme_attach_controller" 00:28:53.963 },{ 00:28:53.963 "params": { 00:28:53.963 "name": "Nvme6", 00:28:53.963 "trtype": "tcp", 00:28:53.963 "traddr": "10.0.0.2", 00:28:53.963 "adrfam": "ipv4", 00:28:53.963 "trsvcid": "4420", 00:28:53.963 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:53.963 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:53.963 
"hdgst": false, 00:28:53.963 "ddgst": false 00:28:53.963 }, 00:28:53.963 "method": "bdev_nvme_attach_controller" 00:28:53.963 },{ 00:28:53.963 "params": { 00:28:53.963 "name": "Nvme7", 00:28:53.963 "trtype": "tcp", 00:28:53.963 "traddr": "10.0.0.2", 00:28:53.963 "adrfam": "ipv4", 00:28:53.963 "trsvcid": "4420", 00:28:53.963 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:53.963 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:53.963 "hdgst": false, 00:28:53.963 "ddgst": false 00:28:53.963 }, 00:28:53.963 "method": "bdev_nvme_attach_controller" 00:28:53.963 },{ 00:28:53.963 "params": { 00:28:53.963 "name": "Nvme8", 00:28:53.963 "trtype": "tcp", 00:28:53.963 "traddr": "10.0.0.2", 00:28:53.963 "adrfam": "ipv4", 00:28:53.963 "trsvcid": "4420", 00:28:53.963 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:53.963 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:53.963 "hdgst": false, 00:28:53.963 "ddgst": false 00:28:53.963 }, 00:28:53.963 "method": "bdev_nvme_attach_controller" 00:28:53.963 },{ 00:28:53.963 "params": { 00:28:53.963 "name": "Nvme9", 00:28:53.963 "trtype": "tcp", 00:28:53.963 "traddr": "10.0.0.2", 00:28:53.963 "adrfam": "ipv4", 00:28:53.963 "trsvcid": "4420", 00:28:53.963 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:53.963 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:53.963 "hdgst": false, 00:28:53.963 "ddgst": false 00:28:53.963 }, 00:28:53.963 "method": "bdev_nvme_attach_controller" 00:28:53.963 },{ 00:28:53.963 "params": { 00:28:53.963 "name": "Nvme10", 00:28:53.963 "trtype": "tcp", 00:28:53.963 "traddr": "10.0.0.2", 00:28:53.963 "adrfam": "ipv4", 00:28:53.963 "trsvcid": "4420", 00:28:53.963 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:53.963 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:53.963 "hdgst": false, 00:28:53.963 "ddgst": false 00:28:53.963 }, 00:28:53.963 "method": "bdev_nvme_attach_controller" 00:28:53.963 }' 00:28:53.963 [2024-07-25 13:56:50.652904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.963 [2024-07-25 13:56:50.690958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.342 Running I/O for 10 seconds... 
00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:55.342 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:55.601 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:55.601 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:55.601 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.601 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.601 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.601 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.861 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.861 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:55.861 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:55.861 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 401897 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 401897 ']' 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 401897 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 401897 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:56.138 13:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 401897' 00:28:56.138 killing process with pid 401897 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 401897 00:28:56.138 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 401897 00:28:56.138 [2024-07-25 13:56:52.898962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1746f50 is same with the state(5) to be set 
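The @59/@60/@63 entries above are target/shutdown.sh's waitforio loop: up to ten attempts, each querying bdev_get_iostat for Nvme1n1 over /var/tmp/bdevperf.sock, extracting num_read_ops with jq, and sleeping 0.25 s until the counter reaches 100 (here it climbs 3, 67, 195), after which killprocess stops pid 401897 with kill and reaps it with wait. A minimal sketch of that polling-and-teardown shape follows; the rpc.py path and TARGET_PID are assumptions for a standalone example, and the real killprocess also verifies the pid and process name first.

#!/usr/bin/env bash
rpc=./spdk/scripts/rpc.py   # assumed location of the SPDK RPC client

waitforio() {
    local sock=$1 bdev=$2 ret=1 i count
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    # Bounded retry loop, mirroring the (( i = 10 )) / (( i != 0 )) trace.
    for ((i = 10; i != 0; i--)); do
        # Cumulative read-op counter for the bdev over the RPC socket.
        count=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Once enough I/O has completed, stop the target the way the harness does:
# kill the pid and wait on it so its exit status is collected.
# TARGET_PID is a placeholder; in the harness the pid (401897 here) is a
# child of the test shell, which is what makes the wait legal.
waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$TARGET_PID" && wait "$TARGET_PID"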
[... 13:56:52.899018 through 13:56:52.899559: the same *ERROR* entry repeats for tqpair=0x1746f50 ...]
00:28:56.139 [2024-07-25 13:56:52.900620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a991f0 is same with the state(5) to be set 
00:28:56.139 [2024-07-25 13:56:52.900648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a991f0 is same with the state(5) to be set 
00:28:56.139 [2024-07-25 13:56:52.901469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1747410 is same with the state(5) to be set 
00:28:56.139 [2024-07-25 13:56:52.901481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1747410 is same with the state(5) to be set 
00:28:56.139 [2024-07-25 13:56:52.901491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1747410 is same with the state(5) to be set 
00:28:56.139 [2024-07-25 13:56:52.902573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17478d0 is same with the state(5) to be set 
[... 13:56:52.902599 through 13:56:52.903153: the same *ERROR* entry repeats for tqpair=0x17478d0 ...]
00:28:56.140 [2024-07-25 13:56:52.904135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1747db0 is same with the state(5) to be set 
[... 13:56:52.904161 through 13:56:52.904704: the same *ERROR* entry repeats for tqpair=0x1747db0 ...]
00:28:56.141 [2024-07-25 13:56:52.905397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748270 is same with the state(5) to be set 
[... 13:56:52.905412 through 13:56:52.905951: the same *ERROR* entry repeats for tqpair=0x1748270 ...]
00:28:56.141 [2024-07-25 13:56:52.905959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748270 is same
with the state(5) to be set 00:28:56.141 [2024-07-25 13:56:52.906803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.906993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907002] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the 
state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.907364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748750 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.142 [2024-07-25 13:56:52.909201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 
13:56:52.909251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same 
with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.909583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98850 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910179] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.143 [2024-07-25 13:56:52.910334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the 
state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.910686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98d10 is same with the state(5) to be set 00:28:56.144 [2024-07-25 13:56:52.911875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.144 [2024-07-25 13:56:52.911909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.144 [2024-07-25 13:56:52.911927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.144 [2024-07-25 13:56:52.911937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.144 [2024-07-25 13:56:52.911948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 
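Note on the run above: the tcp.c:1653 message comes from the NVMe-oF TCP target's receive-state setter, nvmf_tcp_qpair_set_recv_state, being invoked with the state the qpair is already in; it logs and changes nothing, which is why the line can repeat hundreds of times while a qpair is being torn down. Below is a minimal sketch of that guard pattern; the enum values, struct layout and function body are illustrative assumptions, not the SPDK source.

```c
/* Sketch of an idempotent state setter that logs when asked to "change"
 * to the state it is already in, mirroring the repeated *ERROR* line
 * above. Names and values are illustrative assumptions, not SPDK code. */
#include <stdio.h>

enum recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,	/* would be state(5) by position; an assumption */
};

struct tqpair {
	enum recv_state recv_state;
};

static void
set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Harmless but noisy: teardown paths can drive the qpair
		 * to the same state over and over. */
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int
main(void)
{
	struct tqpair q = { .recv_state = RECV_STATE_ERROR };

	set_recv_state(&q, RECV_STATE_ERROR);		/* logs, as above */
	set_recv_state(&q, RECV_STATE_AWAIT_PDU_READY);	/* silent change */
	return 0;
}
```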
00:28:56.144 [2024-07-25 13:56:52.911875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.144 [2024-07-25 13:56:52.911909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[matching command/completion pairs repeat between 13:56:52.911927 and 13:56:52.913168 for WRITE sqid:1 cid:43-63 (lba:30080-32640), READ sqid:1 cid:4-8 (lba:25088-25600), WRITE sqid:1 cid:0-3 (lba:32768-33152) and READ sqid:1 cid:9-41 (lba:25728-29824), all len:128, every one completed as ABORTED - SQ DELETION (00/08); duplicate entries elided]
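Note on the completions above: each one carries status "(00/08)", i.e. status code type 0 (generic) with status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion: deleting a submission queue aborts every command still outstanding on it. A small standalone decode sketch follows, with constants mirroring the spec values (SPDK exposes them as SPDK_NVME_SCT_GENERIC and SPDK_NVME_SC_ABORTED_SQ_DELETION); the struct here is illustrative, not SPDK's spdk_nvme_cpl.

```c
/* Standalone decode of the "(SCT/SC)" pair printed in the completion
 * lines above; 00/08 is generic status "Command Aborted due to SQ
 * Deletion". Constants mirror the NVMe spec values. */
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC            0x0	/* SPDK: SPDK_NVME_SCT_GENERIC */
#define SC_ABORTED_SQ_DELETION 0x8	/* SPDK: SPDK_NVME_SC_ABORTED_SQ_DELETION */

struct cpl_status {
	uint16_t sct;	/* status code type */
	uint16_t sc;	/* status code */
};

static int
is_aborted_sq_deletion(struct cpl_status st)
{
	return st.sct == SCT_GENERIC && st.sc == SC_ABORTED_SQ_DELETION;
}

int
main(void)
{
	struct cpl_status st = { .sct = 0x00, .sc = 0x08 };	/* the (00/08) above */

	printf("aborted by SQ deletion: %s\n",
	       is_aborted_sq_deletion(st) ? "yes" : "no");
	return 0;
}
```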
00:28:56.146 [2024-07-25 13:56:52.913197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.146 [2024-07-25 13:56:52.913581] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1526e40 was disconnected and freed. reset controller. 00:28:56.146 [2024-07-25 13:56:52.913630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION pair repeats for cid:1-3, closed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7630 is same with the state(5) to be set; the same cid:0-3 group then repeats for tqpair=0x159f2d0 between 13:56:52.913745 and 13:56:52.913819; duplicate entries elided]
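Note on the sequence above: this is the host-side recovery path. Polling the completion queue returns transport error -6 (ENXIO), the disconnected-qpair callback frees the I/O qpair and requests a controller reset, and the reset then tears down the admin queue as well, aborting its outstanding ASYNC EVENT REQUESTs with the same SQ-deletion status. The sketch below shows the shape of that callback with the reset coalesced so that many simultaneous disconnects trigger only one reset; the structs and the coalescing flag are illustrative assumptions, not the bdev_nvme implementation.

```c
/* Illustrative shape of the recovery step logged above (an assumption,
 * not the bdev_nvme source): a CQ transport error disconnects the qpair;
 * the disconnect callback frees it and requests a controller reset, and
 * concurrent disconnects are coalesced into a single reset. */
#include <stdbool.h>
#include <stdio.h>

struct ctrlr {
	bool reset_in_progress;
};

static void
reset_ctrlr(struct ctrlr *ctrlr)
{
	if (ctrlr->reset_in_progress) {
		return;	/* a reset is already underway; coalesce */
	}
	ctrlr->reset_in_progress = true;
	printf("reset controller.\n");
}

static void
disconnected_qpair_cb(struct ctrlr *ctrlr, void *qpair)
{
	printf("qpair %p was disconnected and freed.\n", qpair);
	/* the real driver releases the qpair's resources here */
	reset_ctrlr(ctrlr);
}

int
main(void)
{
	struct ctrlr ctrlr = { .reset_in_progress = false };
	int q1, q2;

	disconnected_qpair_cb(&ctrlr, &q1);
	disconnected_qpair_cb(&ctrlr, &q2);	/* coalesced: no second reset */
	return 0;
}
```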
cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f2d0 is same with the state(5) to be set 00:28:56.146 [2024-07-25 13:56:52.913847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a53e0 is same with the state(5) to be set 00:28:56.146 [2024-07-25 13:56:52.913945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.913986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.913994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142f260 is same with the state(5) to be set 00:28:56.146 [2024-07-25 13:56:52.914051] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c490 is same with the state(5) to be set 00:28:56.146 [2024-07-25 13:56:52.914150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6030 is same with the state(5) to be set 00:28:56.146 [2024-07-25 13:56:52.914252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d8050 is same with the state(5) to be set 00:28:56.146 [2024-07-25 13:56:52.914354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.146 [2024-07-25 13:56:52.914373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.146 [2024-07-25 13:56:52.914382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c55f0 is same with the state(5) to be set 00:28:56.147 [2024-07-25 13:56:52.914453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf02610 is same with the state(5) to be set 00:28:56.147 [2024-07-25 13:56:52.914552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.147 [2024-07-25 13:56:52.914620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1438880 is same with the state(5) to be set 00:28:56.147 [2024-07-25 13:56:52.914731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.914982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.914991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.147 [2024-07-25 13:56:52.915148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.147 [2024-07-25 13:56:52.915158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.915304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.915314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.924963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.924983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.924993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:56.148 [2024-07-25 13:56:52.925279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 
[2024-07-25 13:56:52.925488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.148 [2024-07-25 13:56:52.925602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.148 [2024-07-25 13:56:52.925614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.925623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.925634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.925644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.925655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.925664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.925675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.925685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 
13:56:52.925767] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14baa60 was disconnected and freed. reset controller. 00:28:56.149 [2024-07-25 13:56:52.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.149 [2024-07-25 13:56:52.926798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.149 [2024-07-25 13:56:52.926809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.926983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.926994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.150 [2024-07-25 13:56:52.927403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.150 [2024-07-25 13:56:52.927471] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d399c0 was disconnected and freed. reset controller. 00:28:56.150 [2024-07-25 13:56:52.928571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7630 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159f2d0 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a53e0 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142f260 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c490 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6030 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d8050 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c55f0 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02610 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.928744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1438880 (9): Bad file descriptor 00:28:56.150 [2024-07-25 13:56:52.930997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:56.150 [2024-07-25 13:56:52.931032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
00:28:56.150 [2024-07-25 13:56:52.931762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:56.150 [2024-07-25 13:56:52.932138-52.933611] [condensed: reconnect attempts] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair with addr=10.0.0.2, port=4420; nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=... is same with the state(5) to be set" - once each for tqpairs 0x15a53e0, 0x15d8050 and 0x15d7630
00:28:56.150-151 [2024-07-25 13:56:52.932799-52.933712] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (repeated 7 times)
00:28:56.151 [2024-07-25 13:56:52.933629-52.933854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for 0x15a53e0, 0x15d8050 and 0x15d7630
00:28:56.151 [2024-07-25 13:56:52.933873-52.933900] [nqn.2016-06.io.spdk:cnode10] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: Ctrlr is in error state; nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed; nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: in failed state.
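[editor's note] On the connect() failures: errno 111 on Linux is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 at that moment, which is expected mid-test while the target side is itself being reset. A quick check, runnable anywhere:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* ECONNREFUSED is 111 on Linux; strerror() gives the usual text. */
        printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }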
00:28:56.151 [2024-07-25 13:56:52.933918-52.934121] [condensed] the same trio of errors follows for [nqn.2016-06.io.spdk:cnode2] and [nqn.2016-06.io.spdk:cnode7]: nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: Ctrlr is in error state; nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed; nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: in failed state; plus bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (logged three times, once per controller)
00:28:56.151-152 [2024-07-25 13:56:52.938755-52.940525] nvme_qpair.c: [condensed: 64 repeated pairs of 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* lines] READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:56.152 [2024-07-25 13:56:52.940539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d950 is same with the state(5) to be set
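[editor's note] The failure chain for cnode10/cnode2/cnode7 above is: disconnect, async reconnect, then spdk_nvme_ctrlr_reconnect_poll_async() (named in the log) returns an error because the TCP connect was refused, so nvme_ctrlr_fail() leaves the controller failed and bdev_nvme reports "Resetting controller failed." A hedged sketch of that cycle against the public SPDK API; exact signatures should be checked in spdk/nvme.h for your SPDK version, and real code polls from an SPDK poller rather than busy-waiting:

    #include <errno.h>
    #include <stdio.h>

    #include "spdk/nvme.h"

    /* reset_ctrlr is our name for the sketch, not an SPDK symbol. */
    static int reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        rc = spdk_nvme_ctrlr_disconnect(ctrlr);   /* the "resetting controller" step */
        if (rc != 0) {
            return rc;
        }

        spdk_nvme_ctrlr_reconnect_async(ctrlr);   /* start the reconnect */

        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);                  /* connect still in progress */

        if (rc != 0) {
            /* This is the path behind "controller reinitialization failed":
             * the TCP connect was refused, so the ctrlr stays failed. */
            fprintf(stderr, "reconnect failed: %d\n", rc);
        }
        return rc;
    }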
00:28:56.153-155 [2024-07-25 13:56:52.941597-52.942866] nvme_qpair.c: [condensed: the same 64-pair run for a second qpair] READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:56.155 [2024-07-25 13:56:52.942876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bbc20 is same with the state(5) to be set
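[editor's note] Each flushed batch above is the same sequential pattern: 64 outstanding reads of len:128 blocks, with cid i sitting at lba 24576 + i * 128, so the last entry of a full run is always cid:63 lba:32640. A two-line check:

    #include <stdio.h>

    int main(void)
    {
        /* First and last command of a run: cid 0 -> lba 24576, cid 63 -> lba 32640. */
        unsigned int base = 24576, len = 128;
        for (unsigned int cid = 0; cid < 64; cid += 63) {
            printf("cid %2u -> lba %u\n", cid, base + cid * len);
        }
        return 0;
    }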
00:28:56.155-157 [2024-07-25 13:56:52.943848-52.944807] nvme_qpair.c: [condensed: a third identical run begins] READ sqid:1 cid:0-47 nsid:1 lba:24576-30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 shown so far, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (run continues)
00:28:56.157 [2024-07-25 13:56:52.944807] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.944982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.944991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.945011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.945031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.945051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.945070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.945090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.945110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.945119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1405810 is same with the state(5) to be set 00:28:56.157 [2024-07-25 13:56:52.946086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.946099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.946112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.946121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.946132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.946141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.946151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.946161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.946171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.157 [2024-07-25 13:56:52.946181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.157 [2024-07-25 13:56:52.946191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.158 [2024-07-25 13:56:52.946564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.158 [2024-07-25 13:56:52.946574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.946987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.946996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.947016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.947035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.947054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.947074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.947093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.159 [2024-07-25 13:56:52.947113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.159 [2024-07-25 13:56:52.947124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.947349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.947359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1406cc0 is same with the state(5) to be set 00:28:56.160 [2024-07-25 13:56:52.948329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.160 [2024-07-25 13:56:52.948624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.160 [2024-07-25 13:56:52.948635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.948983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.948994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.161 [2024-07-25 13:56:52.949158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.161 [2024-07-25 13:56:52.949169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:56.162 [2024-07-25 13:56:52.949316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 
13:56:52.949515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.949593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.949602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408190 is same with the state(5) to be set 00:28:56.162 [2024-07-25 13:56:52.950576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.162 [2024-07-25 13:56:52.950811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.162 [2024-07-25 13:56:52.950820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.950981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.950991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.163 [2024-07-25 13:56:52.951587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.163 [2024-07-25 13:56:52.951596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.951864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.951874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee1430 is same with the state(5) to be set 00:28:56.164 [2024-07-25 13:56:52.952843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.952983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.952994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.164 [2024-07-25 13:56:52.953186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.164 [2024-07-25 13:56:52.953197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.165 [2024-07-25 13:56:52.953967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.165 [2024-07-25 13:56:52.953976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.953987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.953996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.954016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.954036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.954055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.954075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.954094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-25 13:56:52.954114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.166 [2024-07-25 13:56:52.954123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1525960 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.955308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.166 [2024-07-25 13:56:52.955328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:56.166 [2024-07-25 13:56:52.955339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:56.166 [2024-07-25 13:56:52.955350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:56.166 [2024-07-25 13:56:52.955425] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.166 [2024-07-25 13:56:52.955442] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.166 [2024-07-25 13:56:52.955455] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
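Reading note on the burst of completions above: spdk_nvme_print_completion renders the status as an (SCT/SC) hex pair, so "(00/08)" is status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, the expected completion when submission queues are deleted while reads are still queued during a controller reset. A minimal illustrative Python decoder (not part of the test run; only codes seen in this log are mapped):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
    # e.g. "(00/08)" -> Generic Command Status / Command Aborted due to
    # SQ Deletion. Mapping deliberately limited to codes in this log.
    GENERIC_SC = {
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(pair: str) -> str:
        sct, sc = (int(tok, 16) for tok in pair.strip("()").split("/"))
        if sct == 0x0:  # Generic Command Status
            return GENERIC_SC.get(sc, f"generic sc=0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    assert decode_status("(00/08)") == "ABORTED - SQ DELETION"

Every entry in this dump decodes identically, which is why the burst is so uniform: all outstanding reads on the deleted submission queues are completed with the same abort status.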
00:28:56.166 [2024-07-25 13:56:52.955522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:56.166 [2024-07-25 13:56:52.955534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:56.166 task offset: 29952 on job bdev=Nvme10n1 fails 00:28:56.166 00:28:56.166 Latency(us) 00:28:56.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.166 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme1n1 ended in about 0.93 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme1n1 : 0.93 205.40 12.84 68.47 0.00 231574.12 18979.23 208876.34 00:28:56.166 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme2n1 ended in about 0.92 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme2n1 : 0.92 277.37 17.34 69.34 0.00 179875.68 17301.50 205520.90 00:28:56.166 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme3n1 ended in about 0.94 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme3n1 : 0.94 204.91 12.81 68.30 0.00 224693.25 20656.95 204682.04 00:28:56.166 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme4n1 ended in about 0.94 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme4n1 : 0.94 204.42 12.78 68.14 0.00 221519.26 20237.52 206359.76 00:28:56.166 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme5n1 ended in about 0.94 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme5n1 : 0.94 203.93 12.75 67.98 0.00 218336.26 18664.65 194615.71 00:28:56.166 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme6n1 ended in about 0.94 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme6n1 : 0.94 203.45 12.72 67.82 0.00 215148.95 18559.80 202165.45 00:28:56.166 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme7n1 ended in about 0.92 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme7n1 : 0.92 277.04 17.32 69.26 0.00 165129.58 17720.93 205520.90 00:28:56.166 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme8n1 ended in about 0.95 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme8n1 : 0.95 202.96 12.69 67.65 0.00 208223.64 17720.93 204682.04 00:28:56.166 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme9n1 ended in about 0.95 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme9n1 : 0.95 205.65 12.85 67.49 0.00 202775.77 17825.79 212231.78 00:28:56.166 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.166 Job: Nvme10n1 ended in about 0.92 seconds with error 00:28:56.166 Verification LBA range: start 0x0 length 0x400 00:28:56.166 Nvme10n1 : 0.92 212.63 13.29 69.43 0.00 191859.97 15938.36 223136.97 00:28:56.166 =================================================================================================================== 00:28:56.166 Total : 
2197.76 137.36 683.88 0.00 204302.43 15938.36 223136.97 00:28:56.166 [2024-07-25 13:56:52.977514] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:56.166 [2024-07-25 13:56:52.977552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:56.166 [2024-07-25 13:56:52.977990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.978010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c490 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.978023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c490 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.978324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.978336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1438880 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.978345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1438880 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.978593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.978605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x142f260 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.978614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142f260 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.978787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.978799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c55f0 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.978808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c55f0 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.980316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:56.166 [2024-07-25 13:56:52.980333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:56.166 [2024-07-25 13:56:52.980650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.980664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf02610 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.980674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf02610 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.980997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.981010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d6030 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.981019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6030 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.981246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-25 13:56:52.981258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159f2d0 with 
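Sanity note on the table above: with the 65536-byte IO size, the MiB/s column is simply IOPS / 16, and the Total row is the per-column sum. An illustrative Python cross-check using the figures as printed (0.01 rounding tolerance; not part of the test run):

    # Cross-check the bdevperf summary: with 65536-byte IOs,
    # MiB/s = IOPS * 65536 / 2**20 = IOPS / 16.
    # Figures copied from the Nvme1n1..Nvme10n1 rows above.
    iops  = [205.40, 277.37, 204.91, 204.42, 203.93,
             203.45, 277.04, 202.96, 205.65, 212.63]
    fails = [68.47, 69.34, 68.30, 68.14, 67.98,
             67.82, 69.26, 67.65, 67.49, 69.43]

    assert abs(iops[0] / 16 - 12.84) < 0.01    # Nvme1n1 MiB/s
    assert abs(sum(iops) - 2197.76) < 0.01     # Total IOPS
    assert abs(sum(fails) - 683.88) < 0.01     # Total Fail/s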
addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-25 13:56:52.981267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f2d0 is same with the state(5) to be set 00:28:56.166 [2024-07-25 13:56:52.981282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c490 (9): Bad file descriptor 00:28:56.166 [2024-07-25 13:56:52.981296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1438880 (9): Bad file descriptor 00:28:56.166 [2024-07-25 13:56:52.981313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142f260 (9): Bad file descriptor 00:28:56.166 [2024-07-25 13:56:52.981324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c55f0 (9): Bad file descriptor 00:28:56.166 [2024-07-25 13:56:52.981354] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.166 [2024-07-25 13:56:52.981372] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.166 [2024-07-25 13:56:52.981385] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.166 [2024-07-25 13:56:52.981397] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.166 [2024-07-25 13:56:52.981410] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:56.167 [2024-07-25 13:56:52.981472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:56.167 [2024-07-25 13:56:52.981814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.167 [2024-07-25 13:56:52.981828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d8050 with addr=10.0.0.2, port=4420 00:28:56.167 [2024-07-25 13:56:52.981838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d8050 is same with the state(5) to be set 00:28:56.167 [2024-07-25 13:56:52.982080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.167 [2024-07-25 13:56:52.982092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a53e0 with addr=10.0.0.2, port=4420 00:28:56.167 [2024-07-25 13:56:52.982102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a53e0 is same with the state(5) to be set 00:28:56.167 [2024-07-25 13:56:52.982113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02610 (9): Bad file descriptor 00:28:56.167 [2024-07-25 13:56:52.982124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6030 (9): Bad file descriptor 00:28:56.167 [2024-07-25 13:56:52.982135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159f2d0 (9): Bad file descriptor 00:28:56.167 [2024-07-25 13:56:52.982147] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:28:56.167 [2024-07-25 13:56:52.982179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.167 [2024-07-25 13:56:52.982663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d7630 with addr=10.0.0.2, port=4420 00:28:56.167 [2024-07-25 13:56:52.982672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7630 is same with the state(5) to be set 00:28:56.167 [2024-07-25 13:56:52.982682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d8050 (9): Bad file descriptor 00:28:56.167 [2024-07-25 13:56:52.982693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a53e0 (9): Bad file descriptor 00:28:56.167 [2024-07-25 13:56:52.982703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:28:56.167 [2024-07-25 13:56:52.982733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7630 (9): Bad file descriptor 00:28:56.167 [2024-07-25 13:56:52.982838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.167 [2024-07-25 13:56:52.982921] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:56.167 [2024-07-25 13:56:52.982932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:56.167 [2024-07-25 13:56:52.982941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:56.167 [2024-07-25 13:56:52.982964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
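The burst of "connect() failed, errno = 111" and "Resetting controller failed." entries above is the expected tail of shutdown_tc3: the target process was force-killed while bdevperf still held controllers cnode1 through cnode10, so every reconnect poll sees ECONNREFUSED and each controller is left in failed state. The trace below then clears nvmfpid and reaps the long-dead target pid, which is why "kill: (402199) - No such process" is followed by "true". A minimal sketch of that tolerant-kill pattern, with the variable name taken from the trace:

  # Tolerate ESRCH: the target may already be gone after a forced shutdown.
  kill -9 "$nvmfpid" || true
  nvmfpid=    # drop the stale pid so later cleanup traps do not re-kill it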
00:28:56.427 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:56.427 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 402199 00:28:57.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (402199) - No such process 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.810 rmmod nvme_tcp 00:28:57.810 rmmod nvme_fabrics 00:28:57.810 rmmod nvme_keyring 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.810 13:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.810 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.718 00:28:59.718 real 0m7.179s 00:28:59.718 user 0m16.042s 00:28:59.718 sys 0m1.565s 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.718 ************************************ 00:28:59.718 END TEST nvmf_shutdown_tc3 00:28:59.718 ************************************ 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:59.718 00:28:59.718 real 0m31.592s 00:28:59.718 user 1m14.280s 00:28:59.718 sys 0m10.206s 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:59.718 ************************************ 00:28:59.718 END TEST nvmf_shutdown 00:28:59.718 ************************************ 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:28:59.718 00:28:59.718 real 17m44.866s 00:28:59.718 user 46m59.747s 00:28:59.718 sys 5m25.269s 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.718 13:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:59.718 ************************************ 00:28:59.718 END TEST nvmf_target_extra 00:28:59.718 ************************************ 00:28:59.977 13:56:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:59.977 13:56:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:59.977 13:56:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.978 13:56:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:59.978 ************************************ 00:28:59.978 START TEST nvmf_host 00:28:59.978 ************************************ 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:59.978 * Looking for test storage... 
00:28:59.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.978 ************************************ 00:28:59.978 START TEST nvmf_multicontroller 00:28:59.978 ************************************ 00:28:59.978 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:00.238 * Looking for test storage... 
00:29:00.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.239 13:56:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:00.239 13:56:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.816 13:57:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:06.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:06.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:06.816 Found net devices under 0000:af:00.0: cvl_0_0 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:06.816 Found net devices under 0000:af:00.1: cvl_0_1 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:06.816 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:29:07.078 00:29:07.078 --- 10.0.0.2 ping statistics --- 00:29:07.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.078 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:29:07.078 00:29:07.078 --- 10.0.0.1 ping statistics --- 00:29:07.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.078 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=406737 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 406737 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 406737 ']' 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.078 13:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.078 [2024-07-25 13:57:03.888125] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:29:07.078 [2024-07-25 13:57:03.888178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.078 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.078 [2024-07-25 13:57:03.928559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:07.078 [2024-07-25 13:57:03.964380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:07.357 [2024-07-25 13:57:04.002752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.357 [2024-07-25 13:57:04.002797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.357 [2024-07-25 13:57:04.002806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.357 [2024-07-25 13:57:04.002815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.357 [2024-07-25 13:57:04.002839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.357 [2024-07-25 13:57:04.002955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.357 [2024-07-25 13:57:04.003043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.357 [2024-07-25 13:57:04.003044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.925 [2024-07-25 13:57:04.753261] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.925 Malloc0 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.925 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 [2024-07-25 13:57:04.820419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 [2024-07-25 13:57:04.828346] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 Malloc1 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=407222 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 407222 /var/tmp/bdevperf.sock 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 407222 ']' 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
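From this point the RPC calls go to the bdevperf app over its private socket, /var/tmp/bdevperf.sock, rather than the target's default socket. A hedged equivalent of the first attach, assuming the stock scripts/rpc.py client from the SPDK tree in place of the test's rpc_cmd wrapper:

  # Attach subsystem cnode1 as bdev NVMe0; -i and -c pin the host-side
  # address and service id so the multipath variants can be compared later.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

Repeating the command under the same bdev name with a conflicting hostnqn, subsystem NQN, or multipath mode is what produces the JSON-RPC error -114 responses recorded below.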
00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.184 13:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.122 NVMe0n1 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.122 13:57:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.122 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.122 1 00:29:09.122 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:09.122 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:09.122 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:09.122 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 request: 00:29:09.383 { 00:29:09.383 "name": "NVMe0", 00:29:09.383 "trtype": "tcp", 00:29:09.383 "traddr": "10.0.0.2", 00:29:09.383 "adrfam": "ipv4", 00:29:09.383 
"trsvcid": "4420", 00:29:09.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.383 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:09.383 "hostaddr": "10.0.0.2", 00:29:09.383 "hostsvcid": "60000", 00:29:09.383 "prchk_reftag": false, 00:29:09.383 "prchk_guard": false, 00:29:09.383 "hdgst": false, 00:29:09.383 "ddgst": false, 00:29:09.383 "method": "bdev_nvme_attach_controller", 00:29:09.383 "req_id": 1 00:29:09.383 } 00:29:09.383 Got JSON-RPC error response 00:29:09.383 response: 00:29:09.383 { 00:29:09.383 "code": -114, 00:29:09.383 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:09.383 } 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.383 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 request: 00:29:09.383 { 00:29:09.383 "name": "NVMe0", 00:29:09.383 "trtype": "tcp", 00:29:09.383 "traddr": "10.0.0.2", 00:29:09.383 "adrfam": "ipv4", 00:29:09.383 "trsvcid": "4420", 00:29:09.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:09.383 "hostaddr": "10.0.0.2", 00:29:09.383 "hostsvcid": "60000", 00:29:09.383 "prchk_reftag": false, 00:29:09.383 "prchk_guard": false, 00:29:09.383 "hdgst": false, 00:29:09.383 "ddgst": false, 00:29:09.383 "method": "bdev_nvme_attach_controller", 00:29:09.383 "req_id": 1 00:29:09.383 } 00:29:09.383 Got JSON-RPC error response 00:29:09.383 response: 00:29:09.383 { 00:29:09.383 "code": -114, 00:29:09.383 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:29:09.383 } 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.384 request: 00:29:09.384 { 00:29:09.384 "name": "NVMe0", 00:29:09.384 "trtype": "tcp", 00:29:09.384 "traddr": "10.0.0.2", 00:29:09.384 "adrfam": "ipv4", 00:29:09.384 "trsvcid": "4420", 00:29:09.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.384 "hostaddr": "10.0.0.2", 00:29:09.384 "hostsvcid": "60000", 00:29:09.384 "prchk_reftag": false, 00:29:09.384 "prchk_guard": false, 00:29:09.384 "hdgst": false, 00:29:09.384 "ddgst": false, 00:29:09.384 "multipath": "disable", 00:29:09.384 "method": "bdev_nvme_attach_controller", 00:29:09.384 "req_id": 1 00:29:09.384 } 00:29:09.384 Got JSON-RPC error response 00:29:09.384 response: 00:29:09.384 { 00:29:09.384 "code": -114, 00:29:09.384 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:09.384 } 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.384 request: 00:29:09.384 { 00:29:09.384 "name": "NVMe0", 00:29:09.384 "trtype": "tcp", 00:29:09.384 "traddr": "10.0.0.2", 00:29:09.384 "adrfam": "ipv4", 00:29:09.384 "trsvcid": "4420", 00:29:09.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.384 "hostaddr": "10.0.0.2", 00:29:09.384 "hostsvcid": "60000", 00:29:09.384 "prchk_reftag": false, 00:29:09.384 "prchk_guard": false, 00:29:09.384 "hdgst": false, 00:29:09.384 "ddgst": false, 00:29:09.384 "multipath": "failover", 00:29:09.384 "method": "bdev_nvme_attach_controller", 00:29:09.384 "req_id": 1 00:29:09.384 } 00:29:09.384 Got JSON-RPC error response 00:29:09.384 response: 00:29:09.384 { 00:29:09.384 "code": -114, 00:29:09.384 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:09.384 } 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.384 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.644 00:29:09.644 13:57:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.644 00:29:09.644 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:09.903 13:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.841 0 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 407222 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 407222 ']' 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 407222 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:10.841 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 407222 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.100 
13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 407222' 00:29:11.100 killing process with pid 407222 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 407222 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 407222 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:29:11.100 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:11.100 [2024-07-25 13:57:04.935157] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:11.100 [2024-07-25 13:57:04.935217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407222 ] 00:29:11.100 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.100 [2024-07-25 13:57:04.971703] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:11.100 [2024-07-25 13:57:05.008398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.100 [2024-07-25 13:57:05.047443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.100 [2024-07-25 13:57:06.526533] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name a7a45816-2701-471c-93ea-a7ddbd22c7ba already exists 00:29:11.100 [2024-07-25 13:57:06.526566] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:a7a45816-2701-471c-93ea-a7ddbd22c7ba alias for bdev NVMe1n1 00:29:11.100 [2024-07-25 13:57:06.526578] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:11.100 Running I/O for 1 seconds... 00:29:11.100 00:29:11.100 Latency(us) 00:29:11.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.100 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:11.100 NVMe0n1 : 1.00 24370.88 95.20 0.00 0.00 5235.86 4430.23 14470.35 00:29:11.100 =================================================================================================================== 00:29:11.100 Total : 24370.88 95.20 0.00 0.00 5235.86 4430.23 14470.35 00:29:11.100 Received shutdown signal, test time was about 1.000000 seconds 00:29:11.100 00:29:11.100 Latency(us) 00:29:11.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.100 =================================================================================================================== 00:29:11.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.100 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:11.100 13:57:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:11.100 rmmod nvme_tcp 00:29:11.100 rmmod nvme_fabrics 00:29:11.359 rmmod nvme_keyring 00:29:11.359 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:11.359 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:11.359 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:11.359 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 406737 ']' 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 406737 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 406737 ']' 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 406737 00:29:11.360 13:57:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406737 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406737' 00:29:11.360 killing process with pid 406737 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 406737 00:29:11.360 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 406737 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.620 13:57:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.528 13:57:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:13.528 00:29:13.528 real 0m13.610s 00:29:13.528 user 0m17.771s 00:29:13.528 sys 0m6.350s 00:29:13.528 13:57:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.528 13:57:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 ************************************ 00:29:13.528 END TEST nvmf_multicontroller 00:29:13.528 ************************************ 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.787 ************************************ 00:29:13.787 START TEST nvmf_aer 00:29:13.787 ************************************ 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:13.787 * Looking for test storage... 
00:29:13.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:13.787 13:57:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.355 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.355 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:20.355 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:20.355 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:20.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:20.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:20.356 Found net devices under 0000:af:00.0: cvl_0_0 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.356 13:57:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:20.356 Found net devices under 0000:af:00.1: cvl_0_1 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:20.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:29:20.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:29:20.356 00:29:20.356 --- 10.0.0.2 ping statistics --- 00:29:20.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.356 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:20.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:29:20.356 00:29:20.356 --- 10.0.0.1 ping statistics --- 00:29:20.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.356 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:20.356 13:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=411297 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 411297 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 411297 ']' 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.356 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.357 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.357 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.357 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.357 [2024-07-25 13:57:17.067704] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
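The interface setup logged a few lines above is ordinary Linux network-namespace plumbing: nvmf_tcp_init moves one port of the NIC pair into a private namespace for the target (10.0.0.2) and leaves its sibling in the root namespace for the initiator (10.0.0.1), then pings across the link. A minimal sketch of the same wiring, with interface names and addresses taken from the log (run as root; assumes the cvl_0_0/cvl_0_1 ports already exist on the host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # same reachability check as the log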
00:29:20.357 [2024-07-25 13:57:17.067757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.357 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.357 [2024-07-25 13:57:17.110701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:20.357 [2024-07-25 13:57:17.141674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.357 [2024-07-25 13:57:17.183386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.357 [2024-07-25 13:57:17.183422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.357 [2024-07-25 13:57:17.183432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.357 [2024-07-25 13:57:17.183441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.357 [2024-07-25 13:57:17.183450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.357 [2024-07-25 13:57:17.183515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.357 [2024-07-25 13:57:17.184300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.357 [2024-07-25 13:57:17.184330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.357 [2024-07-25 13:57:17.184328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 [2024-07-25 13:57:17.927230] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 Malloc0 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:21.295 13:57:17 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 [2024-07-25 13:57:17.982031] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.295 13:57:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 [ 00:29:21.295 { 00:29:21.295 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:21.295 "subtype": "Discovery", 00:29:21.295 "listen_addresses": [], 00:29:21.295 "allow_any_host": true, 00:29:21.295 "hosts": [] 00:29:21.295 }, 00:29:21.295 { 00:29:21.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.295 "subtype": "NVMe", 00:29:21.295 "listen_addresses": [ 00:29:21.295 { 00:29:21.295 "trtype": "TCP", 00:29:21.295 "adrfam": "IPv4", 00:29:21.295 "traddr": "10.0.0.2", 00:29:21.295 "trsvcid": "4420" 00:29:21.295 } 00:29:21.295 ], 00:29:21.295 "allow_any_host": true, 00:29:21.295 "hosts": [], 00:29:21.295 "serial_number": "SPDK00000000000001", 00:29:21.295 "model_number": "SPDK bdev Controller", 00:29:21.295 "max_namespaces": 2, 00:29:21.295 "min_cntlid": 1, 00:29:21.295 "max_cntlid": 65519, 00:29:21.295 "namespaces": [ 00:29:21.295 { 00:29:21.295 "nsid": 1, 00:29:21.295 "bdev_name": "Malloc0", 00:29:21.295 "name": "Malloc0", 00:29:21.295 "nguid": "B3E182A7F6BC4A5EB34FF954DE66A162", 00:29:21.295 "uuid": "b3e182a7-f6bc-4a5e-b34f-f954de66a162" 00:29:21.295 } 00:29:21.295 ] 00:29:21.295 } 00:29:21.295 ] 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=411576 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:21.295 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:21.295 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.555 Malloc1 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.555 Asynchronous Event Request test 00:29:21.555 Attaching to 10.0.0.2 00:29:21.555 Attached to 10.0.0.2 00:29:21.555 Registering asynchronous event callbacks... 00:29:21.555 Starting namespace attribute notice tests for all controllers... 00:29:21.555 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:21.555 aer_cb - Changed Namespace 00:29:21.555 Cleaning up... 
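The namespace-change AER above is provoked entirely over JSON-RPC: while the aer tool sits connected to cnode1, the test attaches a second malloc bdev as nsid 2, which fires the "Changed Namespace" notice. A minimal sketch of the same trigger sequence, with the bdev and subsystem names taken from the log (assumes scripts/rpc.py from the SPDK tree and the target's default RPC socket):

  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  scripts/rpc.py nvmf_get_subsystems    # the subsystem listing that follows shows nsid 2 attached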
00:29:21.555 [ 00:29:21.555 { 00:29:21.555 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:21.555 "subtype": "Discovery", 00:29:21.555 "listen_addresses": [], 00:29:21.555 "allow_any_host": true, 00:29:21.555 "hosts": [] 00:29:21.555 }, 00:29:21.555 { 00:29:21.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.555 "subtype": "NVMe", 00:29:21.555 "listen_addresses": [ 00:29:21.555 { 00:29:21.555 "trtype": "TCP", 00:29:21.555 "adrfam": "IPv4", 00:29:21.555 "traddr": "10.0.0.2", 00:29:21.555 "trsvcid": "4420" 00:29:21.555 } 00:29:21.555 ], 00:29:21.555 "allow_any_host": true, 00:29:21.555 "hosts": [], 00:29:21.555 "serial_number": "SPDK00000000000001", 00:29:21.555 "model_number": "SPDK bdev Controller", 00:29:21.555 "max_namespaces": 2, 00:29:21.555 "min_cntlid": 1, 00:29:21.555 "max_cntlid": 65519, 00:29:21.555 "namespaces": [ 00:29:21.555 { 00:29:21.555 "nsid": 1, 00:29:21.555 "bdev_name": "Malloc0", 00:29:21.555 "name": "Malloc0", 00:29:21.555 "nguid": "B3E182A7F6BC4A5EB34FF954DE66A162", 00:29:21.555 "uuid": "b3e182a7-f6bc-4a5e-b34f-f954de66a162" 00:29:21.555 }, 00:29:21.555 { 00:29:21.555 "nsid": 2, 00:29:21.555 "bdev_name": "Malloc1", 00:29:21.555 "name": "Malloc1", 00:29:21.555 "nguid": "6E9C2132967E4D109DFE60EFA667C545", 00:29:21.555 "uuid": "6e9c2132-967e-4d10-9dfe-60efa667c545" 00:29:21.555 } 00:29:21.555 ] 00:29:21.555 } 00:29:21.555 ] 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 411576 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.555 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:21.814 rmmod 
nvme_tcp 00:29:21.814 rmmod nvme_fabrics 00:29:21.814 rmmod nvme_keyring 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 411297 ']' 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 411297 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 411297 ']' 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 411297 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 411297 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:21.814 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 411297' 00:29:21.815 killing process with pid 411297 00:29:21.815 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 411297 00:29:21.815 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 411297 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.074 13:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.980 00:29:23.980 real 0m10.372s 00:29:23.980 user 0m7.910s 00:29:23.980 sys 0m5.362s 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:23.980 ************************************ 00:29:23.980 END TEST nvmf_aer 00:29:23.980 ************************************ 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.980 13:57:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.240 ************************************ 
00:29:24.240 START TEST nvmf_async_init 00:29:24.240 ************************************ 00:29:24.240 13:57:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:24.240 * Looking for test storage... 00:29:24.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.240 
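[Editor's note on the PATH walls above: /etc/opt/spdk-pkgdep/paths/export.sh is re-sourced at the top of every sub-test and prepends the Go, protoc, and golangci toolchain directories unconditionally, which is why each trace shows the same three directories stacked one more time onto an already-padded PATH. A minimal idempotent guard, assuming one wanted to stop the growth — illustrative only, not what export.sh currently does:

    # Prepend a directory to PATH only if it is not already there
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH
]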
13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9e59170964bc4aefa1f2fcc12ff4ef09 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:24.240 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.241 13:57:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:30.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:30.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.844 
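[Editor's note: gather_supported_nvmf_pci_devs, traced above, builds ID lists for Intel E810/x722 and Mellanox parts and walks the PCI bus, printing "Found 0000:af:00.0 (0x8086 - 0x159b)" for the first E810 port; the same match repeats just below for port 0000:af:00.1. Roughly the same walk, reduced to plain sysfs reads — only the E810 ID pair from this log is matched here, while the real helper also knows the x722 and Mellanox IDs:

    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")            # e.g. 0x8086
        device=$(cat "$dev/device")            # e.g. 0x159b
        driver=$(basename "$(readlink -f "$dev/driver" 2>/dev/null)")
        if [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ]; then
            echo "Found $(basename "$dev") ($vendor - $device), driver: $driver"
        fi
    done
]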
13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:30.844 Found net devices under 0000:af:00.0: cvl_0_0 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:30.844 Found net devices under 0000:af:00.1: cvl_0_1 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.844 13:57:27 
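[Editor's note: nvmf_tcp_init, traced just below, splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, and an iptables rule opens TCP/4420; the two pings then confirm reachability in both directions. Condensed replay of that plumbing (interface names taken from this log; the addr-flush steps are omitted; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
]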
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:30.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:29:30.844 00:29:30.844 --- 10.0.0.2 ping statistics --- 00:29:30.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.844 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:29:30.844 00:29:30.844 --- 10.0.0.1 ping statistics --- 00:29:30.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.844 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.844 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=415255 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 415255 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 415255 ']' 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:30.845 13:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:30.845 [2024-07-25 13:57:27.572476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:30.845 [2024-07-25 13:57:27.572526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.845 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.845 [2024-07-25 13:57:27.612306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:29:30.845 [2024-07-25 13:57:27.645603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.845 [2024-07-25 13:57:27.683327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.845 [2024-07-25 13:57:27.683369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.845 [2024-07-25 13:57:27.683378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.845 [2024-07-25 13:57:27.683386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.845 [2024-07-25 13:57:27.683393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.845 [2024-07-25 13:57:27.683415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 [2024-07-25 13:57:28.412341] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 null0 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9e59170964bc4aefa1f2fcc12ff4ef09 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.783 [2024-07-25 13:57:28.456560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.783 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.042 nvme0n1 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.042 [ 00:29:32.042 { 00:29:32.042 "name": "nvme0n1", 00:29:32.042 "aliases": [ 00:29:32.042 "9e591709-64bc-4aef-a1f2-fcc12ff4ef09" 00:29:32.042 ], 00:29:32.042 "product_name": "NVMe disk", 00:29:32.042 "block_size": 512, 00:29:32.042 "num_blocks": 2097152, 00:29:32.042 "uuid": "9e591709-64bc-4aef-a1f2-fcc12ff4ef09", 00:29:32.042 "assigned_rate_limits": { 00:29:32.042 "rw_ios_per_sec": 0, 00:29:32.042 "rw_mbytes_per_sec": 0, 00:29:32.042 "r_mbytes_per_sec": 0, 00:29:32.042 "w_mbytes_per_sec": 0 00:29:32.042 }, 00:29:32.042 "claimed": false, 00:29:32.042 "zoned": false, 00:29:32.042 "supported_io_types": { 00:29:32.042 "read": true, 00:29:32.042 "write": true, 00:29:32.042 "unmap": false, 00:29:32.042 "flush": true, 00:29:32.042 "reset": true, 00:29:32.042 "nvme_admin": true, 00:29:32.042 "nvme_io": true, 00:29:32.042 "nvme_io_md": false, 00:29:32.042 "write_zeroes": true, 00:29:32.042 "zcopy": false, 00:29:32.042 "get_zone_info": false, 00:29:32.042 "zone_management": false, 00:29:32.042 "zone_append": false, 00:29:32.042 "compare": true, 00:29:32.042 "compare_and_write": true, 00:29:32.042 "abort": true, 00:29:32.042 "seek_hole": false, 00:29:32.042 "seek_data": false, 00:29:32.042 "copy": true, 00:29:32.042 "nvme_iov_md": false 00:29:32.042 }, 00:29:32.042 "memory_domains": [ 00:29:32.042 { 00:29:32.042 "dma_device_id": "system", 00:29:32.042 "dma_device_type": 1 00:29:32.042 } 00:29:32.042 ], 00:29:32.042 "driver_specific": { 00:29:32.042 "nvme": [ 00:29:32.042 { 00:29:32.042 "trid": { 00:29:32.042 
"trtype": "TCP", 00:29:32.042 "adrfam": "IPv4", 00:29:32.042 "traddr": "10.0.0.2", 00:29:32.042 "trsvcid": "4420", 00:29:32.042 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:32.042 }, 00:29:32.042 "ctrlr_data": { 00:29:32.042 "cntlid": 1, 00:29:32.042 "vendor_id": "0x8086", 00:29:32.042 "model_number": "SPDK bdev Controller", 00:29:32.042 "serial_number": "00000000000000000000", 00:29:32.042 "firmware_revision": "24.09", 00:29:32.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.042 "oacs": { 00:29:32.042 "security": 0, 00:29:32.042 "format": 0, 00:29:32.042 "firmware": 0, 00:29:32.042 "ns_manage": 0 00:29:32.042 }, 00:29:32.042 "multi_ctrlr": true, 00:29:32.042 "ana_reporting": false 00:29:32.042 }, 00:29:32.042 "vs": { 00:29:32.042 "nvme_version": "1.3" 00:29:32.042 }, 00:29:32.042 "ns_data": { 00:29:32.042 "id": 1, 00:29:32.042 "can_share": true 00:29:32.042 } 00:29:32.042 } 00:29:32.042 ], 00:29:32.042 "mp_policy": "active_passive" 00:29:32.042 } 00:29:32.042 } 00:29:32.042 ] 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.042 [2024-07-25 13:57:28.730119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:32.042 [2024-07-25 13:57:28.730177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261d980 (9): Bad file descriptor 00:29:32.042 [2024-07-25 13:57:28.861799] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:32.042 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.043 [ 00:29:32.043 { 00:29:32.043 "name": "nvme0n1", 00:29:32.043 "aliases": [ 00:29:32.043 "9e591709-64bc-4aef-a1f2-fcc12ff4ef09" 00:29:32.043 ], 00:29:32.043 "product_name": "NVMe disk", 00:29:32.043 "block_size": 512, 00:29:32.043 "num_blocks": 2097152, 00:29:32.043 "uuid": "9e591709-64bc-4aef-a1f2-fcc12ff4ef09", 00:29:32.043 "assigned_rate_limits": { 00:29:32.043 "rw_ios_per_sec": 0, 00:29:32.043 "rw_mbytes_per_sec": 0, 00:29:32.043 "r_mbytes_per_sec": 0, 00:29:32.043 "w_mbytes_per_sec": 0 00:29:32.043 }, 00:29:32.043 "claimed": false, 00:29:32.043 "zoned": false, 00:29:32.043 "supported_io_types": { 00:29:32.043 "read": true, 00:29:32.043 "write": true, 00:29:32.043 "unmap": false, 00:29:32.043 "flush": true, 00:29:32.043 "reset": true, 00:29:32.043 "nvme_admin": true, 00:29:32.043 "nvme_io": true, 00:29:32.043 "nvme_io_md": false, 00:29:32.043 "write_zeroes": true, 00:29:32.043 "zcopy": false, 00:29:32.043 "get_zone_info": false, 00:29:32.043 "zone_management": false, 00:29:32.043 "zone_append": false, 00:29:32.043 "compare": true, 00:29:32.043 "compare_and_write": true, 00:29:32.043 "abort": true, 00:29:32.043 "seek_hole": false, 00:29:32.043 "seek_data": false, 00:29:32.043 "copy": true, 00:29:32.043 "nvme_iov_md": false 00:29:32.043 }, 00:29:32.043 "memory_domains": [ 00:29:32.043 { 00:29:32.043 "dma_device_id": "system", 00:29:32.043 "dma_device_type": 1 00:29:32.043 } 00:29:32.043 ], 00:29:32.043 "driver_specific": { 00:29:32.043 "nvme": [ 00:29:32.043 { 00:29:32.043 "trid": { 00:29:32.043 "trtype": "TCP", 00:29:32.043 "adrfam": "IPv4", 00:29:32.043 "traddr": "10.0.0.2", 00:29:32.043 "trsvcid": "4420", 00:29:32.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:32.043 }, 00:29:32.043 "ctrlr_data": { 00:29:32.043 "cntlid": 2, 00:29:32.043 "vendor_id": "0x8086", 00:29:32.043 "model_number": "SPDK bdev Controller", 00:29:32.043 "serial_number": "00000000000000000000", 00:29:32.043 "firmware_revision": "24.09", 00:29:32.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.043 "oacs": { 00:29:32.043 "security": 0, 00:29:32.043 "format": 0, 00:29:32.043 "firmware": 0, 00:29:32.043 "ns_manage": 0 00:29:32.043 }, 00:29:32.043 "multi_ctrlr": true, 00:29:32.043 "ana_reporting": false 00:29:32.043 }, 00:29:32.043 "vs": { 00:29:32.043 "nvme_version": "1.3" 00:29:32.043 }, 00:29:32.043 "ns_data": { 00:29:32.043 "id": 1, 00:29:32.043 "can_share": true 00:29:32.043 } 00:29:32.043 } 00:29:32.043 ], 00:29:32.043 "mp_policy": "active_passive" 00:29:32.043 } 00:29:32.043 } 00:29:32.043 ] 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.043 13:57:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zXNmLMfyLj 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zXNmLMfyLj 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.043 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 [2024-07-25 13:57:28.934762] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:32.302 [2024-07-25 13:57:28.934889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zXNmLMfyLj 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 [2024-07-25 13:57:28.942774] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zXNmLMfyLj 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.302 13:57:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 [2024-07-25 13:57:28.954814] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.302 [2024-07-25 13:57:28.954853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:32.302 nvme0n1 00:29:32.302 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.302 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:32.302 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:32.302 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 [ 00:29:32.302 { 00:29:32.302 "name": "nvme0n1", 00:29:32.302 "aliases": [ 00:29:32.302 "9e591709-64bc-4aef-a1f2-fcc12ff4ef09" 00:29:32.302 ], 00:29:32.302 "product_name": "NVMe disk", 00:29:32.302 "block_size": 512, 00:29:32.302 "num_blocks": 2097152, 00:29:32.302 "uuid": "9e591709-64bc-4aef-a1f2-fcc12ff4ef09", 00:29:32.302 "assigned_rate_limits": { 00:29:32.302 "rw_ios_per_sec": 0, 00:29:32.302 "rw_mbytes_per_sec": 0, 00:29:32.302 "r_mbytes_per_sec": 0, 00:29:32.302 "w_mbytes_per_sec": 0 00:29:32.302 }, 00:29:32.302 "claimed": false, 00:29:32.302 "zoned": false, 00:29:32.302 "supported_io_types": { 00:29:32.302 "read": true, 00:29:32.302 "write": true, 00:29:32.302 "unmap": false, 00:29:32.302 "flush": true, 00:29:32.302 "reset": true, 00:29:32.302 "nvme_admin": true, 00:29:32.302 "nvme_io": true, 00:29:32.302 "nvme_io_md": false, 00:29:32.302 "write_zeroes": true, 00:29:32.302 "zcopy": false, 00:29:32.302 "get_zone_info": false, 00:29:32.302 "zone_management": false, 00:29:32.302 "zone_append": false, 00:29:32.302 "compare": true, 00:29:32.302 "compare_and_write": true, 00:29:32.302 "abort": true, 00:29:32.302 "seek_hole": false, 00:29:32.302 "seek_data": false, 00:29:32.302 "copy": true, 00:29:32.302 "nvme_iov_md": false 00:29:32.302 }, 00:29:32.302 "memory_domains": [ 00:29:32.302 { 00:29:32.302 "dma_device_id": "system", 00:29:32.302 "dma_device_type": 1 00:29:32.302 } 00:29:32.302 ], 00:29:32.302 "driver_specific": { 00:29:32.302 "nvme": [ 00:29:32.302 { 00:29:32.302 "trid": { 00:29:32.302 "trtype": "TCP", 00:29:32.302 "adrfam": "IPv4", 00:29:32.302 "traddr": "10.0.0.2", 00:29:32.302 "trsvcid": "4421", 00:29:32.302 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:32.302 }, 00:29:32.302 "ctrlr_data": { 00:29:32.302 "cntlid": 3, 00:29:32.302 "vendor_id": "0x8086", 00:29:32.302 "model_number": "SPDK bdev Controller", 00:29:32.302 "serial_number": "00000000000000000000", 00:29:32.302 "firmware_revision": "24.09", 00:29:32.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.302 "oacs": { 00:29:32.302 "security": 0, 00:29:32.302 "format": 0, 00:29:32.302 "firmware": 0, 00:29:32.302 "ns_manage": 0 00:29:32.302 }, 00:29:32.302 "multi_ctrlr": true, 00:29:32.302 "ana_reporting": false 00:29:32.302 }, 00:29:32.302 "vs": { 00:29:32.302 "nvme_version": "1.3" 00:29:32.302 }, 00:29:32.302 "ns_data": { 00:29:32.302 "id": 1, 00:29:32.302 "can_share": true 00:29:32.302 } 00:29:32.302 } 00:29:32.302 ], 00:29:32.302 "mp_policy": "active_passive" 00:29:32.302 } 00:29:32.302 } 00:29:32.302 ] 00:29:32.302 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.302 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.zXNmLMfyLj 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:32.303 13:57:29 
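[Editor's note: the TLS leg traced above is the same attach, but through a second listener on port 4421 gated by a pre-shared key; the log's warnings show both the listener PSK path and spdk_nvme_ctrlr_opts.psk already flagged for removal in v24.09. Replayed as rpc.py calls, with the key literal copied from the trace and flag spellings as this SPDK revision uses them (the key file is removed again during teardown, as the trace below shows):

    key=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"                                        # the test keeps the key file private
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key"
]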
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:32.303 rmmod nvme_tcp 00:29:32.303 rmmod nvme_fabrics 00:29:32.303 rmmod nvme_keyring 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 415255 ']' 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 415255 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 415255 ']' 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 415255 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:32.303 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415255 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415255' 00:29:32.562 killing process with pid 415255 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 415255 00:29:32.562 [2024-07-25 13:57:29.192757] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:32.562 [2024-07-25 13:57:29.192782] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 415255 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.562 13:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.562 13:57:29 
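[Editor's note: killprocess, traced twice in this section (pids 411297 and 415255), is the harness's guarded kill: it checks the pid is alive, refuses to signal anything whose comm is a bare sudo, then kills and reaps. Reconstructed from the trace — the real helper lives in test/common/autotest_common.sh, with more error handling than shown:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
            [ "$process_name" != sudo ] || return 1 # never kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap and propagate the exit status
    }
]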
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:35.106 00:29:35.106 real 0m10.526s 00:29:35.106 user 0m3.790s 00:29:35.106 sys 0m5.355s 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.106 ************************************ 00:29:35.106 END TEST nvmf_async_init 00:29:35.106 ************************************ 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.106 ************************************ 00:29:35.106 START TEST dma 00:29:35.106 ************************************ 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:35.106 * Looking for test storage... 00:29:35.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.106 
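[Editor's note: the dma test that starts here is effectively a no-op on this job: after sourcing common.sh it hits host/dma.sh's transport guard (visible further down as '[' tcp '!=' rdma ']' followed by exit 0), so the reported 0m0.116s is all shell startup. In essence the guard is just the following, with the transport variable name assumed for illustration:

    # host/dma.sh, reduced to its gate - the DMA test only applies to rdma
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi
]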
13:57:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.106 13:57:31 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:35.106 00:29:35.106 real 0m0.116s 00:29:35.106 user 0m0.044s 00:29:35.106 sys 0m0.082s 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:35.106 ************************************ 00:29:35.106 END TEST dma 00:29:35.106 ************************************ 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.106 ************************************ 00:29:35.106 START TEST nvmf_identify 00:29:35.106 ************************************ 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:35.106 * Looking for test storage... 00:29:35.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.106 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:35.107 13:57:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.690 13:57:38 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:41.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.690 13:57:38 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:41.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:41.690 Found net devices under 0000:af:00.0: cvl_0_0 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:41.690 Found net devices under 0000:af:00.1: cvl_0_1 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:29:41.690 00:29:41.690 --- 10.0.0.2 ping statistics --- 00:29:41.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.690 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:41.690 00:29:41.690 --- 10.0.0.1 ping statistics --- 00:29:41.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.690 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.690 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=419250 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 419250 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 419250 ']' 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.691 13:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.949 [2024-07-25 13:57:38.615780] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:41.949 [2024-07-25 13:57:38.615829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.949 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.949 [2024-07-25 13:57:38.656103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:29:41.949 [2024-07-25 13:57:38.691261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.949 [2024-07-25 13:57:38.733148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.949 [2024-07-25 13:57:38.733190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.949 [2024-07-25 13:57:38.733200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.949 [2024-07-25 13:57:38.733208] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.949 [2024-07-25 13:57:38.733231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.949 [2024-07-25 13:57:38.733281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.949 [2024-07-25 13:57:38.733374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.950 [2024-07-25 13:57:38.733457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.950 [2024-07-25 13:57:38.733458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.889 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 [2024-07-25 13:57:39.432007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 Malloc0 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 [2024-07-25 13:57:39.531047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.890 [ 00:29:42.890 { 00:29:42.890 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.890 "subtype": "Discovery", 00:29:42.890 "listen_addresses": [ 00:29:42.890 { 00:29:42.890 "trtype": "TCP", 00:29:42.890 "adrfam": "IPv4", 00:29:42.890 "traddr": "10.0.0.2", 00:29:42.890 "trsvcid": "4420" 00:29:42.890 } 00:29:42.890 ], 00:29:42.890 "allow_any_host": true, 00:29:42.890 "hosts": [] 00:29:42.890 }, 00:29:42.890 { 00:29:42.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.890 "subtype": "NVMe", 00:29:42.890 "listen_addresses": [ 00:29:42.890 { 00:29:42.890 "trtype": "TCP", 00:29:42.890 "adrfam": "IPv4", 00:29:42.890 "traddr": "10.0.0.2", 00:29:42.890 "trsvcid": "4420" 00:29:42.890 } 00:29:42.890 ], 00:29:42.890 "allow_any_host": true, 00:29:42.890 "hosts": [], 00:29:42.890 "serial_number": "SPDK00000000000001", 00:29:42.890 "model_number": "SPDK bdev Controller", 00:29:42.890 "max_namespaces": 32, 00:29:42.890 "min_cntlid": 1, 00:29:42.890 "max_cntlid": 65519, 00:29:42.890 "namespaces": [ 00:29:42.890 { 00:29:42.890 "nsid": 1, 00:29:42.890 "bdev_name": "Malloc0", 00:29:42.890 "name": "Malloc0", 00:29:42.890 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:42.890 "eui64": "ABCDEF0123456789", 00:29:42.890 "uuid": "95053cc9-f6b7-4ea6-bead-048829938d58" 00:29:42.890 } 00:29:42.890 ] 00:29:42.890 } 00:29:42.890 ] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.890 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:42.890 [2024-07-25 13:57:39.588203] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:29:42.890 [2024-07-25 13:57:39.588247] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419296 ] 00:29:42.890 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.890 [2024-07-25 13:57:39.604236] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:42.890 [2024-07-25 13:57:39.620109] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:42.890 [2024-07-25 13:57:39.620157] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:42.890 [2024-07-25 13:57:39.620163] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:42.890 [2024-07-25 13:57:39.620177] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:42.890 [2024-07-25 13:57:39.620187] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:42.890 [2024-07-25 13:57:39.620569] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:42.890 [2024-07-25 13:57:39.620598] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b00630 0 00:29:42.890 [2024-07-25 13:57:39.634728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:42.890 [2024-07-25 13:57:39.634748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:42.890 [2024-07-25 13:57:39.634755] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:42.890 [2024-07-25 13:57:39.634760] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:42.890 [2024-07-25 13:57:39.634804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.634811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.634816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.890 [2024-07-25 13:57:39.634832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:42.890 [2024-07-25 13:57:39.634850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.890 [2024-07-25 13:57:39.642727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.890 [2024-07-25 13:57:39.642737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.890 [2024-07-25 13:57:39.642742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.642748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.890 [2024-07-25 13:57:39.642758] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:42.890 [2024-07-25 13:57:39.642765] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:42.890 [2024-07-25 13:57:39.642775] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no 
timeout) 00:29:42.890 [2024-07-25 13:57:39.642790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.642795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.642799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.890 [2024-07-25 13:57:39.642808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.890 [2024-07-25 13:57:39.642823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.890 [2024-07-25 13:57:39.643017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.890 [2024-07-25 13:57:39.643024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.890 [2024-07-25 13:57:39.643028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.643033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.890 [2024-07-25 13:57:39.643042] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:42.890 [2024-07-25 13:57:39.643052] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:42.890 [2024-07-25 13:57:39.643060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.643065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.890 [2024-07-25 13:57:39.643069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.890 [2024-07-25 13:57:39.643077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.891 [2024-07-25 13:57:39.643090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.643220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.891 [2024-07-25 13:57:39.643227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.643231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.643243] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:42.891 [2024-07-25 13:57:39.643253] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:42.891 [2024-07-25 13:57:39.643260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.643276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.891 [2024-07-25 13:57:39.643288] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.643420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.891 [2024-07-25 13:57:39.643427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.643432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.643442] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:42.891 [2024-07-25 13:57:39.643453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.643472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.891 [2024-07-25 13:57:39.643483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.643653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.891 [2024-07-25 13:57:39.643660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.643664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.643674] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:42.891 [2024-07-25 13:57:39.643681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:42.891 [2024-07-25 13:57:39.643690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:42.891 [2024-07-25 13:57:39.643796] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:42.891 [2024-07-25 13:57:39.643803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:42.891 [2024-07-25 13:57:39.643813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.643823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.643830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.891 [2024-07-25 13:57:39.643842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.644013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:29:42.891 [2024-07-25 13:57:39.644020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.644025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.644035] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:42.891 [2024-07-25 13:57:39.644046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.644062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.891 [2024-07-25 13:57:39.644073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.644202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.891 [2024-07-25 13:57:39.644209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.644214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.644224] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:42.891 [2024-07-25 13:57:39.644232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:42.891 [2024-07-25 13:57:39.644241] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:42.891 [2024-07-25 13:57:39.644251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:42.891 [2024-07-25 13:57:39.644261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.644273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.891 [2024-07-25 13:57:39.644284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.644493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.891 [2024-07-25 13:57:39.644499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.891 [2024-07-25 13:57:39.644504] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644509] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b00630): datao=0, datal=4096, cccid=0 00:29:42.891 [2024-07-25 13:57:39.644515] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4ef80) on tqpair(0x1b00630): expected_datao=0, payload_size=4096 00:29:42.891 [2024-07-25 13:57:39.644521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644530] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644535] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.891 [2024-07-25 13:57:39.644587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.644591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.644605] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:42.891 [2024-07-25 13:57:39.644611] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:42.891 [2024-07-25 13:57:39.644617] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:42.891 [2024-07-25 13:57:39.644624] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:42.891 [2024-07-25 13:57:39.644630] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:42.891 [2024-07-25 13:57:39.644636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:42.891 [2024-07-25 13:57:39.644646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:42.891 [2024-07-25 13:57:39.644656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.644673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:42.891 [2024-07-25 13:57:39.644686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.891 [2024-07-25 13:57:39.644832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.891 [2024-07-25 13:57:39.644841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.891 [2024-07-25 13:57:39.644846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.891 [2024-07-25 13:57:39.644858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1b00630) 00:29:42.891 [2024-07-25 13:57:39.644874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.891 [2024-07-25 13:57:39.644881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.891 [2024-07-25 13:57:39.644890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.644896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.892 [2024-07-25 13:57:39.644903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.644908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.644912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.644918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.892 [2024-07-25 13:57:39.644925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.644930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.644934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.644940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.892 [2024-07-25 13:57:39.644946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:42.892 [2024-07-25 13:57:39.644958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:42.892 [2024-07-25 13:57:39.644966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.644970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.644977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.892 [2024-07-25 13:57:39.644991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4ef80, cid 0, qid 0 00:29:42.892 [2024-07-25 13:57:39.644997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f100, cid 1, qid 0 00:29:42.892 [2024-07-25 13:57:39.645003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f280, cid 2, qid 0 00:29:42.892 [2024-07-25 13:57:39.645008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.892 [2024-07-25 13:57:39.645013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f580, cid 4, qid 0 00:29:42.892 [2024-07-25 13:57:39.645158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.892 [2024-07-25 13:57:39.645165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.892 [2024-07-25 13:57:39.645170] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f580) on tqpair=0x1b00630 00:29:42.892 [2024-07-25 13:57:39.645180] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:42.892 [2024-07-25 13:57:39.645188] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:42.892 [2024-07-25 13:57:39.645200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.645211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.892 [2024-07-25 13:57:39.645223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f580, cid 4, qid 0 00:29:42.892 [2024-07-25 13:57:39.645321] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.892 [2024-07-25 13:57:39.645327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.892 [2024-07-25 13:57:39.645332] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645336] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b00630): datao=0, datal=4096, cccid=4 00:29:42.892 [2024-07-25 13:57:39.645342] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f580) on tqpair(0x1b00630): expected_datao=0, payload_size=4096 00:29:42.892 [2024-07-25 13:57:39.645348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645436] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645440] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.892 [2024-07-25 13:57:39.645540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.892 [2024-07-25 13:57:39.645545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f580) on tqpair=0x1b00630 00:29:42.892 [2024-07-25 13:57:39.645563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:42.892 [2024-07-25 13:57:39.645586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.645598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.892 [2024-07-25 13:57:39.645606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 
13:57:39.645622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.892 [2024-07-25 13:57:39.645638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f580, cid 4, qid 0 00:29:42.892 [2024-07-25 13:57:39.645644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f700, cid 5, qid 0 00:29:42.892 [2024-07-25 13:57:39.645764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.892 [2024-07-25 13:57:39.645771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.892 [2024-07-25 13:57:39.645776] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645780] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b00630): datao=0, datal=1024, cccid=4 00:29:42.892 [2024-07-25 13:57:39.645786] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f580) on tqpair(0x1b00630): expected_datao=0, payload_size=1024 00:29:42.892 [2024-07-25 13:57:39.645792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645799] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645805] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.892 [2024-07-25 13:57:39.645818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.892 [2024-07-25 13:57:39.645822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.645827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f700) on tqpair=0x1b00630 00:29:42.892 [2024-07-25 13:57:39.690723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.892 [2024-07-25 13:57:39.690734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.892 [2024-07-25 13:57:39.690738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.690743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f580) on tqpair=0x1b00630 00:29:42.892 [2024-07-25 13:57:39.690756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.690761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.690770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.892 [2024-07-25 13:57:39.690789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f580, cid 4, qid 0 00:29:42.892 [2024-07-25 13:57:39.690901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.892 [2024-07-25 13:57:39.690908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.892 [2024-07-25 13:57:39.690912] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.690917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b00630): datao=0, datal=3072, cccid=4 00:29:42.892 [2024-07-25 13:57:39.690923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f580) on tqpair(0x1b00630): expected_datao=0, payload_size=3072 00:29:42.892 
[2024-07-25 13:57:39.690928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.690936] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.690941] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.691016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.892 [2024-07-25 13:57:39.691023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.892 [2024-07-25 13:57:39.691027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.691032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f580) on tqpair=0x1b00630 00:29:42.892 [2024-07-25 13:57:39.691041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.691046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b00630) 00:29:42.892 [2024-07-25 13:57:39.691054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.892 [2024-07-25 13:57:39.691070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f580, cid 4, qid 0 00:29:42.892 [2024-07-25 13:57:39.691169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.892 [2024-07-25 13:57:39.691176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.892 [2024-07-25 13:57:39.691181] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.691185] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b00630): datao=0, datal=8, cccid=4 00:29:42.892 [2024-07-25 13:57:39.691191] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b4f580) on tqpair(0x1b00630): expected_datao=0, payload_size=8 00:29:42.892 [2024-07-25 13:57:39.691197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.691203] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.691208] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.731967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.892 [2024-07-25 13:57:39.731980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.892 [2024-07-25 13:57:39.731985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.892 [2024-07-25 13:57:39.731990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f580) on tqpair=0x1b00630 00:29:42.892 ===================================================== 00:29:42.893 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:42.893 ===================================================== 00:29:42.893 Controller Capabilities/Features 00:29:42.893 ================================ 00:29:42.893 Vendor ID: 0000 00:29:42.893 Subsystem Vendor ID: 0000 00:29:42.893 Serial Number: .................... 00:29:42.893 Model Number: ........................................ 
00:29:42.893 Firmware Version: 24.09
00:29:42.893 Recommended Arb Burst: 0
00:29:42.893 IEEE OUI Identifier: 00 00 00
00:29:42.893 Multi-path I/O
00:29:42.893 May have multiple subsystem ports: No
00:29:42.893 May have multiple controllers: No
00:29:42.893 Associated with SR-IOV VF: No
00:29:42.893 Max Data Transfer Size: 131072
00:29:42.893 Max Number of Namespaces: 0
00:29:42.893 Max Number of I/O Queues: 1024
00:29:42.893 NVMe Specification Version (VS): 1.3
00:29:42.893 NVMe Specification Version (Identify): 1.3
00:29:42.893 Maximum Queue Entries: 128
00:29:42.893 Contiguous Queues Required: Yes
00:29:42.893 Arbitration Mechanisms Supported
00:29:42.893 Weighted Round Robin: Not Supported
00:29:42.893 Vendor Specific: Not Supported
00:29:42.893 Reset Timeout: 15000 ms
00:29:42.893 Doorbell Stride: 4 bytes
00:29:42.893 NVM Subsystem Reset: Not Supported
00:29:42.893 Command Sets Supported
00:29:42.893 NVM Command Set: Supported
00:29:42.893 Boot Partition: Not Supported
00:29:42.893 Memory Page Size Minimum: 4096 bytes
00:29:42.893 Memory Page Size Maximum: 4096 bytes
00:29:42.893 Persistent Memory Region: Not Supported
00:29:42.893 Optional Asynchronous Events Supported
00:29:42.893 Namespace Attribute Notices: Not Supported
00:29:42.893 Firmware Activation Notices: Not Supported
00:29:42.893 ANA Change Notices: Not Supported
00:29:42.893 PLE Aggregate Log Change Notices: Not Supported
00:29:42.893 LBA Status Info Alert Notices: Not Supported
00:29:42.893 EGE Aggregate Log Change Notices: Not Supported
00:29:42.893 Normal NVM Subsystem Shutdown event: Not Supported
00:29:42.893 Zone Descriptor Change Notices: Not Supported
00:29:42.893 Discovery Log Change Notices: Supported
00:29:42.893 Controller Attributes
00:29:42.893 128-bit Host Identifier: Not Supported
00:29:42.893 Non-Operational Permissive Mode: Not Supported
00:29:42.893 NVM Sets: Not Supported
00:29:42.893 Read Recovery Levels: Not Supported
00:29:42.893 Endurance Groups: Not Supported
00:29:42.893 Predictable Latency Mode: Not Supported
00:29:42.893 Traffic Based Keep Alive: Not Supported
00:29:42.893 Namespace Granularity: Not Supported
00:29:42.893 SQ Associations: Not Supported
00:29:42.893 UUID List: Not Supported
00:29:42.893 Multi-Domain Subsystem: Not Supported
00:29:42.893 Fixed Capacity Management: Not Supported
00:29:42.893 Variable Capacity Management: Not Supported
00:29:42.893 Delete Endurance Group: Not Supported
00:29:42.893 Delete NVM Set: Not Supported
00:29:42.893 Extended LBA Formats Supported: Not Supported
00:29:42.893 Flexible Data Placement Supported: Not Supported
00:29:42.893
00:29:42.893 Controller Memory Buffer Support
00:29:42.893 ================================
00:29:42.893 Supported: No
00:29:42.893
00:29:42.893 Persistent Memory Region Support
00:29:42.893 ================================
00:29:42.893 Supported: No
00:29:42.893
00:29:42.893 Admin Command Set Attributes
00:29:42.893 ============================
00:29:42.893 Security Send/Receive: Not Supported
00:29:42.893 Format NVM: Not Supported
00:29:42.893 Firmware Activate/Download: Not Supported
00:29:42.893 Namespace Management: Not Supported
00:29:42.893 Device Self-Test: Not Supported
00:29:42.893 Directives: Not Supported
00:29:42.893 NVMe-MI: Not Supported
00:29:42.893 Virtualization Management: Not Supported
00:29:42.893 Doorbell Buffer Config: Not Supported
00:29:42.893 Get LBA Status Capability: Not Supported
00:29:42.893 Command & Feature Lockdown Capability: Not Supported
00:29:42.893 Abort Command Limit: 1
00:29:42.893 Async Event Request Limit: 4
00:29:42.893 Number of Firmware Slots: N/A
00:29:42.893 Firmware Slot 1 Read-Only: N/A
00:29:42.893 Firmware Activation Without Reset: N/A
00:29:42.893 Multiple Update Detection Support: N/A
00:29:42.893 Firmware Update Granularity: No Information Provided
00:29:42.893 Per-Namespace SMART Log: No
00:29:42.893 Asymmetric Namespace Access Log Page: Not Supported
00:29:42.893 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:42.893 Command Effects Log Page: Not Supported
00:29:42.893 Get Log Page Extended Data: Supported
00:29:42.893 Telemetry Log Pages: Not Supported
00:29:42.893 Persistent Event Log Pages: Not Supported
00:29:42.893 Supported Log Pages Log Page: May Support
00:29:42.893 Commands Supported & Effects Log Page: Not Supported
00:29:42.893 Feature Identifiers & Effects Log Page: May Support
00:29:42.893 NVMe-MI Commands & Effects Log Page: May Support
00:29:42.893 Data Area 4 for Telemetry Log: Not Supported
00:29:42.893 Error Log Page Entries Supported: 128
00:29:42.893 Keep Alive: Not Supported
00:29:42.893
00:29:42.893 NVM Command Set Attributes
00:29:42.893 ==========================
00:29:42.893 Submission Queue Entry Size
00:29:42.893 Max: 1
00:29:42.893 Min: 1
00:29:42.893 Completion Queue Entry Size
00:29:42.893 Max: 1
00:29:42.893 Min: 1
00:29:42.893 Number of Namespaces: 0
00:29:42.893 Compare Command: Not Supported
00:29:42.893 Write Uncorrectable Command: Not Supported
00:29:42.893 Dataset Management Command: Not Supported
00:29:42.893 Write Zeroes Command: Not Supported
00:29:42.893 Set Features Save Field: Not Supported
00:29:42.893 Reservations: Not Supported
00:29:42.893 Timestamp: Not Supported
00:29:42.893 Copy: Not Supported
00:29:42.893 Volatile Write Cache: Not Present
00:29:42.893 Atomic Write Unit (Normal): 1
00:29:42.893 Atomic Write Unit (PFail): 1
00:29:42.893 Atomic Compare & Write Unit: 1
00:29:42.893 Fused Compare & Write: Supported
00:29:42.893 Scatter-Gather List
00:29:42.893 SGL Command Set: Supported
00:29:42.893 SGL Keyed: Supported
00:29:42.893 SGL Bit Bucket Descriptor: Not Supported
00:29:42.893 SGL Metadata Pointer: Not Supported
00:29:42.893 Oversized SGL: Not Supported
00:29:42.893 SGL Metadata Address: Not Supported
00:29:42.893 SGL Offset: Supported
00:29:42.893 Transport SGL Data Block: Not Supported
00:29:42.893 Replay Protected Memory Block: Not Supported
00:29:42.893
00:29:42.893 Firmware Slot Information
00:29:42.893 =========================
00:29:42.893 Active slot: 0
00:29:42.893
00:29:42.893
00:29:42.893 Error Log
00:29:42.893 =========
00:29:42.893
00:29:42.893 Active Namespaces
00:29:42.893 =================
00:29:42.893 Discovery Log Page
00:29:42.893 ==================
00:29:42.893 Generation Counter: 2
00:29:42.893 Number of Records: 2
00:29:42.893 Record Format: 0
00:29:42.893
00:29:42.893 Discovery Log Entry 0
00:29:42.893 ----------------------
00:29:42.893 Transport Type: 3 (TCP)
00:29:42.893 Address Family: 1 (IPv4)
00:29:42.893 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:42.893 Entry Flags:
00:29:42.893 Duplicate Returned Information: 1
00:29:42.894 Explicit Persistent Connection Support for Discovery: 1
00:29:42.894 Transport Requirements:
00:29:42.894 Secure Channel: Not Required
00:29:42.894 Port ID: 0 (0x0000)
00:29:42.894 Controller ID: 65535 (0xffff)
00:29:42.894 Admin Max SQ Size: 128
00:29:42.894 Transport Service Identifier: 4420
00:29:42.894 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:42.894 Transport Address: 10.0.0.2
00:29:42.894
Discovery Log Entry 1 00:29:42.894 ---------------------- 00:29:42.894 Transport Type: 3 (TCP) 00:29:42.894 Address Family: 1 (IPv4) 00:29:42.894 Subsystem Type: 2 (NVM Subsystem) 00:29:42.894 Entry Flags: 00:29:42.894 Duplicate Returned Information: 0 00:29:42.894 Explicit Persistent Connection Support for Discovery: 0 00:29:42.894 Transport Requirements: 00:29:42.894 Secure Channel: Not Required 00:29:42.894 Port ID: 0 (0x0000) 00:29:42.894 Controller ID: 65535 (0xffff) 00:29:42.894 Admin Max SQ Size: 128 00:29:42.894 Transport Service Identifier: 4420 00:29:42.894 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:42.894 Transport Address: 10.0.0.2 [2024-07-25 13:57:39.732076] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:42.894 [2024-07-25 13:57:39.732088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4ef80) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.894 [2024-07-25 13:57:39.732103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f100) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.894 [2024-07-25 13:57:39.732114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f280) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.894 [2024-07-25 13:57:39.732126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.894 [2024-07-25 13:57:39.732143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 13:57:39.732161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.732176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.732270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.732277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.732281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 
13:57:39.732311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.732326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.732432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.732439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.732443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732454] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:42.894 [2024-07-25 13:57:39.732460] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:42.894 [2024-07-25 13:57:39.732471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 13:57:39.732489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.732501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.732599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.732605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.732610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 13:57:39.732642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.732653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.732751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.732758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.732763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732787] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 13:57:39.732794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.732806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.732901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.732908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.732912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.732927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.732936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 13:57:39.732943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.732954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.733049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.733056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.733060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.733065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.733075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.733080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.733084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.894 [2024-07-25 13:57:39.733093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.894 [2024-07-25 13:57:39.733104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.894 [2024-07-25 13:57:39.733201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.894 [2024-07-25 13:57:39.733208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.894 [2024-07-25 13:57:39.733213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.733217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.894 [2024-07-25 13:57:39.733227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.733232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.894 [2024-07-25 13:57:39.733237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.895 [2024-07-25 13:57:39.733244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.895 [2024-07-25 13:57:39.733255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.895 [2024-07-25 13:57:39.733349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.895 [2024-07-25 13:57:39.733356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.895 [2024-07-25 13:57:39.733361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.895 [2024-07-25 13:57:39.733375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.895 [2024-07-25 13:57:39.733392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.895 [2024-07-25 13:57:39.733403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.895 [2024-07-25 13:57:39.733494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.895 [2024-07-25 13:57:39.733501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.895 [2024-07-25 13:57:39.733505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.895 [2024-07-25 13:57:39.733520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.895 [2024-07-25 13:57:39.733536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.895 [2024-07-25 13:57:39.733547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.895 [2024-07-25 13:57:39.733638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.895 [2024-07-25 13:57:39.733644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.895 [2024-07-25 13:57:39.733649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.895 [2024-07-25 13:57:39.733663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.733673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.895 [2024-07-25 13:57:39.733681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.895 [2024-07-25 13:57:39.733693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.895 
[2024-07-25 13:57:39.737722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.895 [2024-07-25 13:57:39.737733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.895 [2024-07-25 13:57:39.737737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.737742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.895 [2024-07-25 13:57:39.737754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.737759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.737764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b00630) 00:29:42.895 [2024-07-25 13:57:39.737771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.895 [2024-07-25 13:57:39.737785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b4f400, cid 3, qid 0 00:29:42.895 [2024-07-25 13:57:39.738537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.895 [2024-07-25 13:57:39.738544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.895 [2024-07-25 13:57:39.738549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.895 [2024-07-25 13:57:39.738553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b4f400) on tqpair=0x1b00630 00:29:42.895 [2024-07-25 13:57:39.738562] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:29:42.895 00:29:42.895 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:43.158 [2024-07-25 13:57:39.782374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:43.158 [2024-07-25 13:57:39.782427] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419374 ] 00:29:43.158 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.158 [2024-07-25 13:57:39.797984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
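The trace below is the heart of this run: the complete NVMe/TCP host initialization state machine, in order — socket connect and icreq/icresp exchange, FABRIC CONNECT on the admin queue, VS and CAP property reads, CC.EN = 1, polling until CSTS.RDY = 1, then IDENTIFY, SET FEATURES (AER configuration, keep-alive timeout, number of queues), and namespace enumeration before the controller reaches the ready state. For readers who want to reproduce the same sequence, a minimal sketch against SPDK's public host API follows; it is illustrative only, not the spdk_nvme_identify source, and assumes an SPDK development install to compile against (the program name identify_sketch is made up):

/* identify_sketch.c - minimal sketch (assumed name; not the identify tool's
 * actual source). Connects to the same NVMe-oF/TCP target this test exercises
 * and prints a few identify fields. Build against an installed SPDK. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same connection string the harness passes to spdk_nvme_identify -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* spdk_nvme_connect() drives the whole sequence logged below:
	 * icreq, FABRIC CONNECT, VS/CAP property gets, CC.EN = 1,
	 * CSTS.RDY poll, IDENTIFY, SET FEATURES, keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	/* mn/sn/fr are fixed-width, space-padded byte fields, not C strings. */
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), cdata->sn);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() hides all of these per-state transitions inside the library; they are visible in this log only because the identify tool was invoked with -L all, which enables every SPDK debug log flag.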
00:29:43.158 [2024-07-25 13:57:39.813805] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:43.158 [2024-07-25 13:57:39.813845] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:43.158 [2024-07-25 13:57:39.813851] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:43.158 [2024-07-25 13:57:39.813864] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:43.158 [2024-07-25 13:57:39.813873] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:43.158 [2024-07-25 13:57:39.814257] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:43.158 [2024-07-25 13:57:39.814284] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11f9630 0 00:29:43.158 [2024-07-25 13:57:39.828724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:43.158 [2024-07-25 13:57:39.828741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:43.158 [2024-07-25 13:57:39.828747] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:43.158 [2024-07-25 13:57:39.828754] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:43.158 [2024-07-25 13:57:39.828790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.828796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.828801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.158 [2024-07-25 13:57:39.828813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:43.158 [2024-07-25 13:57:39.828829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.158 [2024-07-25 13:57:39.836728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.158 [2024-07-25 13:57:39.836737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.158 [2024-07-25 13:57:39.836743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.836748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.158 [2024-07-25 13:57:39.836760] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:43.158 [2024-07-25 13:57:39.836767] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:43.158 [2024-07-25 13:57:39.836773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:43.158 [2024-07-25 13:57:39.836786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.836791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.836796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.158 [2024-07-25 13:57:39.836804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.158 [2024-07-25 13:57:39.836818] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.158 [2024-07-25 13:57:39.836978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.158 [2024-07-25 13:57:39.836985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.158 [2024-07-25 13:57:39.836990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.836995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.158 [2024-07-25 13:57:39.837003] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:43.158 [2024-07-25 13:57:39.837013] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:43.158 [2024-07-25 13:57:39.837021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.837026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.837030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.158 [2024-07-25 13:57:39.837037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.158 [2024-07-25 13:57:39.837050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.158 [2024-07-25 13:57:39.837147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.158 [2024-07-25 13:57:39.837154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.158 [2024-07-25 13:57:39.837158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.158 [2024-07-25 13:57:39.837163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.837169] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:43.159 [2024-07-25 13:57:39.837178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:43.159 [2024-07-25 13:57:39.837188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.837204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.159 [2024-07-25 13:57:39.837216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.159 [2024-07-25 13:57:39.837306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.159 [2024-07-25 13:57:39.837313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.159 [2024-07-25 13:57:39.837317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.837327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:43.159 [2024-07-25 13:57:39.837338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.837355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.159 [2024-07-25 13:57:39.837366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.159 [2024-07-25 13:57:39.837536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.159 [2024-07-25 13:57:39.837542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.159 [2024-07-25 13:57:39.837547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.837556] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:43.159 [2024-07-25 13:57:39.837562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:43.159 [2024-07-25 13:57:39.837571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:43.159 [2024-07-25 13:57:39.837678] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:43.159 [2024-07-25 13:57:39.837683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:43.159 [2024-07-25 13:57:39.837691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.837707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.159 [2024-07-25 13:57:39.837727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.159 [2024-07-25 13:57:39.837819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.159 [2024-07-25 13:57:39.837833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.159 [2024-07-25 13:57:39.837838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.837848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:43.159 [2024-07-25 13:57:39.837862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837867] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.837878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.159 [2024-07-25 13:57:39.837891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.159 [2024-07-25 13:57:39.837982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.159 [2024-07-25 13:57:39.837989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.159 [2024-07-25 13:57:39.837993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.837998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.838003] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:43.159 [2024-07-25 13:57:39.838009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:43.159 [2024-07-25 13:57:39.838019] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:43.159 [2024-07-25 13:57:39.838028] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:43.159 [2024-07-25 13:57:39.838037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.838042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.838049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.159 [2024-07-25 13:57:39.838060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.159 [2024-07-25 13:57:39.838207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.159 [2024-07-25 13:57:39.838214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.159 [2024-07-25 13:57:39.838218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.838223] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=4096, cccid=0 00:29:43.159 [2024-07-25 13:57:39.838229] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1247f80) on tqpair(0x11f9630): expected_datao=0, payload_size=4096 00:29:43.159 [2024-07-25 13:57:39.838235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.838242] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.838247] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.878861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.159 [2024-07-25 13:57:39.878877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.159 [2024-07-25 13:57:39.878882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:43.159 [2024-07-25 13:57:39.878887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.878896] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:43.159 [2024-07-25 13:57:39.878902] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:43.159 [2024-07-25 13:57:39.878908] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:43.159 [2024-07-25 13:57:39.878913] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:43.159 [2024-07-25 13:57:39.878922] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:43.159 [2024-07-25 13:57:39.878929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:43.159 [2024-07-25 13:57:39.878939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:43.159 [2024-07-25 13:57:39.878950] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.878956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.878960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.878969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.159 [2024-07-25 13:57:39.878983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.159 [2024-07-25 13:57:39.879075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.159 [2024-07-25 13:57:39.879082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.159 [2024-07-25 13:57:39.879086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.159 [2024-07-25 13:57:39.879099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.879115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.159 [2024-07-25 13:57:39.879122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.879137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.159 [2024-07-25 13:57:39.879144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 
13:57:39.879148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11f9630) 00:29:43.159 [2024-07-25 13:57:39.879159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.159 [2024-07-25 13:57:39.879166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.159 [2024-07-25 13:57:39.879175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.160 [2024-07-25 13:57:39.879181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.160 [2024-07-25 13:57:39.879187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879199] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.160 [2024-07-25 13:57:39.879218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.160 [2024-07-25 13:57:39.879233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1247f80, cid 0, qid 0 00:29:43.160 [2024-07-25 13:57:39.879239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248100, cid 1, qid 0 00:29:43.160 [2024-07-25 13:57:39.879244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248280, cid 2, qid 0 00:29:43.160 [2024-07-25 13:57:39.879250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.160 [2024-07-25 13:57:39.879255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.160 [2024-07-25 13:57:39.879373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.160 [2024-07-25 13:57:39.879379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.160 [2024-07-25 13:57:39.879384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.160 [2024-07-25 13:57:39.879395] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:43.160 [2024-07-25 13:57:39.879401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number 
of queues (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.160 [2024-07-25 13:57:39.879443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.160 [2024-07-25 13:57:39.879455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.160 [2024-07-25 13:57:39.879543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.160 [2024-07-25 13:57:39.879550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.160 [2024-07-25 13:57:39.879554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.160 [2024-07-25 13:57:39.879612] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.879631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.160 [2024-07-25 13:57:39.879643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.160 [2024-07-25 13:57:39.879655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.160 [2024-07-25 13:57:39.879860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.160 [2024-07-25 13:57:39.879868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.160 [2024-07-25 13:57:39.879872] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879877] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=4096, cccid=4 00:29:43.160 [2024-07-25 13:57:39.879883] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248580) on tqpair(0x11f9630): expected_datao=0, payload_size=4096 00:29:43.160 [2024-07-25 13:57:39.879890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879898] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.879903] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.160 [2024-07-25 13:57:39.880011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.160 [2024-07-25 13:57:39.880015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.160 [2024-07-25 
13:57:39.880034] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:43.160 [2024-07-25 13:57:39.880046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.880057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.880064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.160 [2024-07-25 13:57:39.880076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.160 [2024-07-25 13:57:39.880089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.160 [2024-07-25 13:57:39.880204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.160 [2024-07-25 13:57:39.880211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.160 [2024-07-25 13:57:39.880215] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880220] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=4096, cccid=4 00:29:43.160 [2024-07-25 13:57:39.880227] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248580) on tqpair(0x11f9630): expected_datao=0, payload_size=4096 00:29:43.160 [2024-07-25 13:57:39.880232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880244] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.160 [2024-07-25 13:57:39.880351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.160 [2024-07-25 13:57:39.880356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.160 [2024-07-25 13:57:39.880374] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.880385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.880393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.160 [2024-07-25 13:57:39.880404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.160 [2024-07-25 13:57:39.880417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.160 [2024-07-25 13:57:39.880521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:29:43.160 [2024-07-25 13:57:39.880528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.160 [2024-07-25 13:57:39.880535] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880540] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=4096, cccid=4 00:29:43.160 [2024-07-25 13:57:39.880545] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248580) on tqpair(0x11f9630): expected_datao=0, payload_size=4096 00:29:43.160 [2024-07-25 13:57:39.880551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880558] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880562] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.160 [2024-07-25 13:57:39.880667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.160 [2024-07-25 13:57:39.880672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.160 [2024-07-25 13:57:39.880677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.160 [2024-07-25 13:57:39.880685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.880695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.880704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.884720] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.884730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.884737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.884743] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:43.160 [2024-07-25 13:57:39.884749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:43.160 [2024-07-25 13:57:39.884756] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:43.161 [2024-07-25 13:57:39.884771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.884776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.884784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.884792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.884796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:29:43.161 [2024-07-25 13:57:39.884801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.884808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.161 [2024-07-25 13:57:39.884825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.161 [2024-07-25 13:57:39.884831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248700, cid 5, qid 0 00:29:43.161 [2024-07-25 13:57:39.885003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.885010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.885015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.885030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.885036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.885041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248700) on tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.885056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248700, cid 5, qid 0 00:29:43.161 [2024-07-25 13:57:39.885264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.885271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.885275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248700) on tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.885290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248700, cid 5, qid 0 00:29:43.161 [2024-07-25 13:57:39.885401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.885408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.885413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248700) on 
tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.885428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248700, cid 5, qid 0 00:29:43.161 [2024-07-25 13:57:39.885543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.885549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.885554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248700) on tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.885575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11f9630) 00:29:43.161 [2024-07-25 13:57:39.885645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.161 [2024-07-25 13:57:39.885658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248700, cid 5, qid 0 00:29:43.161 [2024-07-25 13:57:39.885664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248580, cid 4, qid 0 00:29:43.161 [2024-07-25 13:57:39.885669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248880, cid 6, qid 0 00:29:43.161 [2024-07-25 13:57:39.885675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248a00, cid 7, qid 0 00:29:43.161 [2024-07-25 13:57:39.885838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.161 [2024-07-25 
13:57:39.885846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.161 [2024-07-25 13:57:39.885851] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.885855] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=8192, cccid=5 00:29:43.161 [2024-07-25 13:57:39.885861] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248700) on tqpair(0x11f9630): expected_datao=0, payload_size=8192 00:29:43.161 [2024-07-25 13:57:39.885867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886099] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886104] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.161 [2024-07-25 13:57:39.886116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.161 [2024-07-25 13:57:39.886120] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886125] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=512, cccid=4 00:29:43.161 [2024-07-25 13:57:39.886131] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248580) on tqpair(0x11f9630): expected_datao=0, payload_size=512 00:29:43.161 [2024-07-25 13:57:39.886136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886143] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886147] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.161 [2024-07-25 13:57:39.886160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.161 [2024-07-25 13:57:39.886164] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886169] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=512, cccid=6 00:29:43.161 [2024-07-25 13:57:39.886174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248880) on tqpair(0x11f9630): expected_datao=0, payload_size=512 00:29:43.161 [2024-07-25 13:57:39.886180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886187] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886191] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.161 [2024-07-25 13:57:39.886203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.161 [2024-07-25 13:57:39.886208] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886214] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f9630): datao=0, datal=4096, cccid=7 00:29:43.161 [2024-07-25 13:57:39.886220] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1248a00) on tqpair(0x11f9630): expected_datao=0, payload_size=4096 00:29:43.161 [2024-07-25 13:57:39.886226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.161 [2024-07-25 
13:57:39.886233] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886237] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.886258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.886262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248700) on tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.886280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.886287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.886291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.161 [2024-07-25 13:57:39.886296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248580) on tqpair=0x11f9630 00:29:43.161 [2024-07-25 13:57:39.886307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.161 [2024-07-25 13:57:39.886314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.161 [2024-07-25 13:57:39.886318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.162 [2024-07-25 13:57:39.886323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248880) on tqpair=0x11f9630 00:29:43.162 [2024-07-25 13:57:39.886330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.162 [2024-07-25 13:57:39.886337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.162 [2024-07-25 13:57:39.886341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.162 [2024-07-25 13:57:39.886346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248a00) on tqpair=0x11f9630 00:29:43.162 ===================================================== 00:29:43.162 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.162 ===================================================== 00:29:43.162 Controller Capabilities/Features 00:29:43.162 ================================ 00:29:43.162 Vendor ID: 8086 00:29:43.162 Subsystem Vendor ID: 8086 00:29:43.162 Serial Number: SPDK00000000000001 00:29:43.162 Model Number: SPDK bdev Controller 00:29:43.162 Firmware Version: 24.09 00:29:43.162 Recommended Arb Burst: 6 00:29:43.162 IEEE OUI Identifier: e4 d2 5c 00:29:43.162 Multi-path I/O 00:29:43.162 May have multiple subsystem ports: Yes 00:29:43.162 May have multiple controllers: Yes 00:29:43.162 Associated with SR-IOV VF: No 00:29:43.162 Max Data Transfer Size: 131072 00:29:43.162 Max Number of Namespaces: 32 00:29:43.162 Max Number of I/O Queues: 127 00:29:43.162 NVMe Specification Version (VS): 1.3 00:29:43.162 NVMe Specification Version (Identify): 1.3 00:29:43.162 Maximum Queue Entries: 128 00:29:43.162 Contiguous Queues Required: Yes 00:29:43.162 Arbitration Mechanisms Supported 00:29:43.162 Weighted Round Robin: Not Supported 00:29:43.162 Vendor Specific: Not Supported 00:29:43.162 Reset Timeout: 15000 ms 00:29:43.162 Doorbell Stride: 4 bytes 00:29:43.162 NVM Subsystem Reset: Not Supported 00:29:43.162 Command Sets Supported 00:29:43.162 NVM Command Set: Supported 00:29:43.162 Boot Partition: Not Supported 00:29:43.162 Memory Page Size Minimum: 4096 bytes 00:29:43.162 
Memory Page Size Maximum: 4096 bytes 00:29:43.162 Persistent Memory Region: Not Supported 00:29:43.162 Optional Asynchronous Events Supported 00:29:43.162 Namespace Attribute Notices: Supported 00:29:43.162 Firmware Activation Notices: Not Supported 00:29:43.162 ANA Change Notices: Not Supported 00:29:43.162 PLE Aggregate Log Change Notices: Not Supported 00:29:43.162 LBA Status Info Alert Notices: Not Supported 00:29:43.162 EGE Aggregate Log Change Notices: Not Supported 00:29:43.162 Normal NVM Subsystem Shutdown event: Not Supported 00:29:43.162 Zone Descriptor Change Notices: Not Supported 00:29:43.162 Discovery Log Change Notices: Not Supported 00:29:43.162 Controller Attributes 00:29:43.162 128-bit Host Identifier: Supported 00:29:43.162 Non-Operational Permissive Mode: Not Supported 00:29:43.162 NVM Sets: Not Supported 00:29:43.162 Read Recovery Levels: Not Supported 00:29:43.162 Endurance Groups: Not Supported 00:29:43.162 Predictable Latency Mode: Not Supported 00:29:43.162 Traffic Based Keep Alive: Not Supported 00:29:43.162 Namespace Granularity: Not Supported 00:29:43.162 SQ Associations: Not Supported 00:29:43.162 UUID List: Not Supported 00:29:43.162 Multi-Domain Subsystem: Not Supported 00:29:43.162 Fixed Capacity Management: Not Supported 00:29:43.162 Variable Capacity Management: Not Supported 00:29:43.162 Delete Endurance Group: Not Supported 00:29:43.162 Delete NVM Set: Not Supported 00:29:43.162 Extended LBA Formats Supported: Not Supported 00:29:43.162 Flexible Data Placement Supported: Not Supported 00:29:43.162 00:29:43.162 Controller Memory Buffer Support 00:29:43.162 ================================ 00:29:43.162 Supported: No 00:29:43.162 00:29:43.162 Persistent Memory Region Support 00:29:43.162 ================================ 00:29:43.162 Supported: No 00:29:43.162 00:29:43.162 Admin Command Set Attributes 00:29:43.162 ============================ 00:29:43.162 Security Send/Receive: Not Supported 00:29:43.162 Format NVM: Not Supported 00:29:43.162 Firmware Activate/Download: Not Supported 00:29:43.162 Namespace Management: Not Supported 00:29:43.162 Device Self-Test: Not Supported 00:29:43.162 Directives: Not Supported 00:29:43.162 NVMe-MI: Not Supported 00:29:43.162 Virtualization Management: Not Supported 00:29:43.162 Doorbell Buffer Config: Not Supported 00:29:43.162 Get LBA Status Capability: Not Supported 00:29:43.162 Command & Feature Lockdown Capability: Not Supported 00:29:43.162 Abort Command Limit: 4 00:29:43.162 Async Event Request Limit: 4 00:29:43.162 Number of Firmware Slots: N/A 00:29:43.162 Firmware Slot 1 Read-Only: N/A 00:29:43.162 Firmware Activation Without Reset: N/A 00:29:43.162 Multiple Update Detection Support: N/A 00:29:43.162 Firmware Update Granularity: No Information Provided 00:29:43.162 Per-Namespace SMART Log: No 00:29:43.162 Asymmetric Namespace Access Log Page: Not Supported 00:29:43.162 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:43.162 Command Effects Log Page: Supported 00:29:43.162 Get Log Page Extended Data: Supported 00:29:43.162 Telemetry Log Pages: Not Supported 00:29:43.162 Persistent Event Log Pages: Not Supported 00:29:43.162 Supported Log Pages Log Page: May Support 00:29:43.162 Commands Supported & Effects Log Page: Not Supported 00:29:43.162 Feature Identifiers & Effects Log Page: May Support 00:29:43.162 NVMe-MI Commands & Effects Log Page: May Support 00:29:43.162 Data Area 4 for Telemetry Log: Not Supported 00:29:43.162 Error Log Page Entries Supported: 128 00:29:43.162 Keep Alive: Supported 00:29:43.162 Keep 
Alive Granularity: 10000 ms 00:29:43.162 00:29:43.162 NVM Command Set Attributes 00:29:43.162 ========================== 00:29:43.162 Submission Queue Entry Size 00:29:43.162 Max: 64 00:29:43.162 Min: 64 00:29:43.162 Completion Queue Entry Size 00:29:43.162 Max: 16 00:29:43.162 Min: 16 00:29:43.162 Number of Namespaces: 32 00:29:43.162 Compare Command: Supported 00:29:43.162 Write Uncorrectable Command: Not Supported 00:29:43.162 Dataset Management Command: Supported 00:29:43.162 Write Zeroes Command: Supported 00:29:43.162 Set Features Save Field: Not Supported 00:29:43.162 Reservations: Supported 00:29:43.162 Timestamp: Not Supported 00:29:43.162 Copy: Supported 00:29:43.162 Volatile Write Cache: Present 00:29:43.162 Atomic Write Unit (Normal): 1 00:29:43.162 Atomic Write Unit (PFail): 1 00:29:43.162 Atomic Compare & Write Unit: 1 00:29:43.162 Fused Compare & Write: Supported 00:29:43.162 Scatter-Gather List 00:29:43.162 SGL Command Set: Supported 00:29:43.162 SGL Keyed: Supported 00:29:43.162 SGL Bit Bucket Descriptor: Not Supported 00:29:43.162 SGL Metadata Pointer: Not Supported 00:29:43.162 Oversized SGL: Not Supported 00:29:43.162 SGL Metadata Address: Not Supported 00:29:43.162 SGL Offset: Supported 00:29:43.162 Transport SGL Data Block: Not Supported 00:29:43.162 Replay Protected Memory Block: Not Supported 00:29:43.162 00:29:43.162 Firmware Slot Information 00:29:43.162 ========================= 00:29:43.162 Active slot: 1 00:29:43.162 Slot 1 Firmware Revision: 24.09 00:29:43.162 00:29:43.162 00:29:43.162 Commands Supported and Effects 00:29:43.162 ============================== 00:29:43.162 Admin Commands 00:29:43.162 -------------- 00:29:43.162 Get Log Page (02h): Supported 00:29:43.162 Identify (06h): Supported 00:29:43.162 Abort (08h): Supported 00:29:43.162 Set Features (09h): Supported 00:29:43.162 Get Features (0Ah): Supported 00:29:43.162 Asynchronous Event Request (0Ch): Supported 00:29:43.162 Keep Alive (18h): Supported 00:29:43.162 I/O Commands 00:29:43.162 ------------ 00:29:43.162 Flush (00h): Supported LBA-Change 00:29:43.162 Write (01h): Supported LBA-Change 00:29:43.162 Read (02h): Supported 00:29:43.162 Compare (05h): Supported 00:29:43.162 Write Zeroes (08h): Supported LBA-Change 00:29:43.162 Dataset Management (09h): Supported LBA-Change 00:29:43.162 Copy (19h): Supported LBA-Change 00:29:43.162 00:29:43.162 Error Log 00:29:43.162 ========= 00:29:43.162 00:29:43.162 Arbitration 00:29:43.162 =========== 00:29:43.162 Arbitration Burst: 1 00:29:43.162 00:29:43.162 Power Management 00:29:43.162 ================ 00:29:43.162 Number of Power States: 1 00:29:43.162 Current Power State: Power State #0 00:29:43.163 Power State #0: 00:29:43.163 Max Power: 0.00 W 00:29:43.163 Non-Operational State: Operational 00:29:43.163 Entry Latency: Not Reported 00:29:43.163 Exit Latency: Not Reported 00:29:43.163 Relative Read Throughput: 0 00:29:43.163 Relative Read Latency: 0 00:29:43.163 Relative Write Throughput: 0 00:29:43.163 Relative Write Latency: 0 00:29:43.163 Idle Power: Not Reported 00:29:43.163 Active Power: Not Reported 00:29:43.163 Non-Operational Permissive Mode: Not Supported 00:29:43.163 00:29:43.163 Health Information 00:29:43.163 ================== 00:29:43.163 Critical Warnings: 00:29:43.163 Available Spare Space: OK 00:29:43.163 Temperature: OK 00:29:43.163 Device Reliability: OK 00:29:43.163 Read Only: No 00:29:43.163 Volatile Memory Backup: OK 00:29:43.163 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:43.163 Temperature Threshold: 0 Kelvin (-273 Celsius) 
00:29:43.163 Available Spare: 0% 00:29:43.163 Available Spare Threshold: 0% 00:29:43.163 Life Percentage Used:[2024-07-25 13:57:39.886435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11f9630) 00:29:43.163 [2024-07-25 13:57:39.886448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.163 [2024-07-25 13:57:39.886462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248a00, cid 7, qid 0 00:29:43.163 [2024-07-25 13:57:39.886564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.163 [2024-07-25 13:57:39.886571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.163 [2024-07-25 13:57:39.886576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248a00) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886612] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:43.163 [2024-07-25 13:57:39.886622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1247f80) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.163 [2024-07-25 13:57:39.886635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248100) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.163 [2024-07-25 13:57:39.886647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248280) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.163 [2024-07-25 13:57:39.886660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.163 [2024-07-25 13:57:39.886675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.163 [2024-07-25 13:57:39.886691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.163 [2024-07-25 13:57:39.886704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.163 [2024-07-25 13:57:39.886796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.163 [2024-07-25 13:57:39.886803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.163 [2024-07-25 13:57:39.886808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886812] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.163 [2024-07-25 13:57:39.886836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.163 [2024-07-25 13:57:39.886851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.163 [2024-07-25 13:57:39.886958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.163 [2024-07-25 13:57:39.886965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.163 [2024-07-25 13:57:39.886969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.886974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.886979] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:43.163 [2024-07-25 13:57:39.886985] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:43.163 [2024-07-25 13:57:39.886996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.163 [2024-07-25 13:57:39.887013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.163 [2024-07-25 13:57:39.887024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.163 [2024-07-25 13:57:39.887187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.163 [2024-07-25 13:57:39.887193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.163 [2024-07-25 13:57:39.887198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.887213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.163 [2024-07-25 13:57:39.887230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.163 [2024-07-25 13:57:39.887244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.163 [2024-07-25 13:57:39.887337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.163 [2024-07-25 13:57:39.887343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.163 [2024-07-25 13:57:39.887348] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.887363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.163 [2024-07-25 13:57:39.887379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.163 [2024-07-25 13:57:39.887391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.163 [2024-07-25 13:57:39.887479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.163 [2024-07-25 13:57:39.887485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.163 [2024-07-25 13:57:39.887490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.163 [2024-07-25 13:57:39.887505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.163 [2024-07-25 13:57:39.887514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.887521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.887532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.887620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.887627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.887631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.887646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.887662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.887673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.887774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.887781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.887786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 
00:29:43.164 [2024-07-25 13:57:39.887801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.887817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.887829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.887918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.887925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.887929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.887943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.887952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.887959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.887970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.888063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.888069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.888074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.888089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.888105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.888116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.888202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.888208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.888213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.888228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:43.164 [2024-07-25 13:57:39.888237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.888244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.888255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.888344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.888351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.888356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.888369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.888386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.888397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.888483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.888491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.888496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.888511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.888527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.888538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.888626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.888633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.888637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.888653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.888662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.888669] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.888680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.892725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.892736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.892740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.892745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.892756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.892761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.892766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f9630) 00:29:43.164 [2024-07-25 13:57:39.892773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.164 [2024-07-25 13:57:39.892786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1248400, cid 3, qid 0 00:29:43.164 [2024-07-25 13:57:39.892876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.164 [2024-07-25 13:57:39.892883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.164 [2024-07-25 13:57:39.892888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.164 [2024-07-25 13:57:39.892892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1248400) on tqpair=0x11f9630 00:29:43.164 [2024-07-25 13:57:39.892901] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:43.164 0% 00:29:43.164 Data Units Read: 0 00:29:43.164 Data Units Written: 0 00:29:43.164 Host Read Commands: 0 00:29:43.164 Host Write Commands: 0 00:29:43.164 Controller Busy Time: 0 minutes 00:29:43.164 Power Cycles: 0 00:29:43.164 Power On Hours: 0 hours 00:29:43.164 Unsafe Shutdowns: 0 00:29:43.164 Unrecoverable Media Errors: 0 00:29:43.164 Lifetime Error Log Entries: 0 00:29:43.164 Warning Temperature Time: 0 minutes 00:29:43.164 Critical Temperature Time: 0 minutes 00:29:43.164 00:29:43.164 Number of Queues 00:29:43.164 ================ 00:29:43.164 Number of I/O Submission Queues: 127 00:29:43.164 Number of I/O Completion Queues: 127 00:29:43.164 00:29:43.164 Active Namespaces 00:29:43.164 ================= 00:29:43.164 Namespace ID:1 00:29:43.164 Error Recovery Timeout: Unlimited 00:29:43.164 Command Set Identifier: NVM (00h) 00:29:43.164 Deallocate: Supported 00:29:43.164 Deallocated/Unwritten Error: Not Supported 00:29:43.164 Deallocated Read Value: Unknown 00:29:43.164 Deallocate in Write Zeroes: Not Supported 00:29:43.164 Deallocated Guard Field: 0xFFFF 00:29:43.164 Flush: Supported 00:29:43.164 Reservation: Supported 00:29:43.164 Namespace Sharing Capabilities: Multiple Controllers 00:29:43.165 Size (in LBAs): 131072 (0GiB) 00:29:43.165 Capacity (in LBAs): 131072 (0GiB) 00:29:43.165 Utilization (in LBAs): 131072 (0GiB) 00:29:43.165 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:43.165 EUI64: ABCDEF0123456789 00:29:43.165 UUID: 95053cc9-f6b7-4ea6-bead-048829938d58 00:29:43.165 Thin Provisioning: Not Supported 
00:29:43.165 Per-NS Atomic Units: Yes 00:29:43.165 Atomic Boundary Size (Normal): 0 00:29:43.165 Atomic Boundary Size (PFail): 0 00:29:43.165 Atomic Boundary Offset: 0 00:29:43.165 Maximum Single Source Range Length: 65535 00:29:43.165 Maximum Copy Length: 65535 00:29:43.165 Maximum Source Range Count: 1 00:29:43.165 NGUID/EUI64 Never Reused: No 00:29:43.165 Namespace Write Protected: No 00:29:43.165 Number of LBA Formats: 1 00:29:43.165 Current LBA Format: LBA Format #00 00:29:43.165 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:43.165 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.165 rmmod nvme_tcp 00:29:43.165 rmmod nvme_fabrics 00:29:43.165 rmmod nvme_keyring 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 419250 ']' 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 419250 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 419250 ']' 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 419250 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:43.165 13:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:43.165 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 419250 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419250' 00:29:43.424 killing process with pid 419250 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 419250 00:29:43.424 13:57:40 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 419250 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.424 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:43.425 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:43.425 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.425 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.425 13:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.960 00:29:45.960 real 0m10.611s 00:29:45.960 user 0m7.897s 00:29:45.960 sys 0m5.607s 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:45.960 ************************************ 00:29:45.960 END TEST nvmf_identify 00:29:45.960 ************************************ 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.960 ************************************ 00:29:45.960 START TEST nvmf_perf 00:29:45.960 ************************************ 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:45.960 * Looking for test storage... 
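(The identify run above finishes with a fixed teardown order: delete the subsystem over RPC, stop the nvmf_tgt process, then unload the kernel initiator modules. A minimal standalone sketch of that sequence follows; the rpc.py path, socket path, and NVMF_PID variable are illustrative assumptions, not the exact helpers nvmftestfini uses.)

    #!/usr/bin/env bash
    # Hedged sketch of the teardown sequence logged above (assumed paths/NQN).
    RPC=./scripts/rpc.py            # assumption: run from an SPDK checkout
    SOCK=/var/tmp/spdk.sock         # default SPDK RPC socket
    NQN=nqn.2016-06.io.spdk:cnode1

    # 1. Remove the subsystem first so outstanding I/O is quiesced.
    "$RPC" -s "$SOCK" nvmf_delete_subsystem "$NQN"

    # 2. Stop the target process (the log does killprocess <pid> then wait).
    kill "$NVMF_PID" 2>/dev/null    # NVMF_PID: assumed to hold nvmf_tgt's pid
    wait "$NVMF_PID" 2>/dev/null || true

    # 3. Unload initiator modules in dependency order, as the rmmod lines show.
    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        modprobe -v -r "$mod" 2>/dev/null || true
    done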
00:29:45.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
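(perf.sh pins MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 and drives the target entirely through rpc.py. Once the target is listening, those constants typically feed a provisioning sequence like the hedged sketch below; the transport flags mirror NVMF_TRANSPORT_OPTS='-t tcp -o' and NVMF_SERIAL from common.sh, while the bdev name and listener address are illustrative assumptions rather than a transcription of the script.)

    RPC=./scripts/rpc.py             # assumption: SPDK checkout root
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o         # TCP transport, per NVMF_TRANSPORT_OPTS
    "$RPC" bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420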
00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.960 13:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.567 
13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:52.567 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:52.567 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:29:52.567 Found net devices under 0000:af:00.0: cvl_0_0
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:29:52.567 Found net devices under 0000:af:00.1: cvl_0_1
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:29:52.567 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:52.568 13:57:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:52.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:52.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms
00:29:52.568
00:29:52.568 --- 10.0.0.2 ping statistics ---
00:29:52.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:52.568 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:52.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:52.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms
00:29:52.568
00:29:52.568 --- 10.0.0.1 ping statistics ---
00:29:52.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:52.568 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=422968
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 422968
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 422968 ']'
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:52.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
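[Editor's note: the nvmf_tcp_init trace above reduces to a small amount of iproute2 plumbing: one physical port is moved into a private network namespace to act as the NVMe/TCP target side, the other stays in the root namespace as the initiator, and connectivity is verified in both directions. The sketch below is a minimal reconstruction of those steps for anyone reproducing the topology by hand, not harness output; it assumes the port names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing, and listener port 4420 observed in this run, all of which will differ on other machines.]

#!/usr/bin/env bash
set -e
TARGET_IF=cvl_0_0        # will live inside the namespace and carry 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace and carries 10.0.0.1
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                        # isolate the target port
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                          # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                      # namespace -> root ns

With the wiring in place, the target application is launched inside the namespace (here via `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`), so its listener on 10.0.0.2:4420 is reachable only over the physical link from the initiator port.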
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:52.568 13:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:29:52.568 [2024-07-25 13:57:49.219808] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:29:52.568 [2024-07-25 13:57:49.219863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:52.568 EAL: No free 2048 kB hugepages reported on node 1
00:29:52.568 [2024-07-25 13:57:49.265377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:52.568 [2024-07-25 13:57:49.298898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:52.568 [2024-07-25 13:57:49.339261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:52.568 [2024-07-25 13:57:49.339299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:52.568 [2024-07-25 13:57:49.339309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:52.568 [2024-07-25 13:57:49.339318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:52.568 [2024-07-25 13:57:49.339325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:52.568 [2024-07-25 13:57:49.339369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:52.568 [2024-07-25 13:57:49.339474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:52.568 [2024-07-25 13:57:49.339556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:52.568 [2024-07-25 13:57:49.339557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:53.135 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:53.135 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:29:53.135 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:53.135 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:53.135 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:29:53.393 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:53.393 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:29:53.393 13:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:29:56.683 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:29:56.683 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:29:56.683 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0
00:29:56.684 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:29:56.684 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:29:56.684 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']'
00:29:56.684 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:29:56.684 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:29:56.684 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:29:56.943 [2024-07-25 13:57:53.644996] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:56.943 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:57.202 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:29:57.202 13:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:57.202 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:29:57.202 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:29:57.460 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:57.718 [2024-07-25 13:57:54.381016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:57.718 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:57.718 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']'
00:29:57.718 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:29:57.718 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:29:57.718 13:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:29:59.096 Initializing NVMe Controllers
00:29:59.096 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:29:59.096 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:29:59.096 Initialization complete. Launching workers.
00:29:59.096 ========================================================
00:29:59.096 Latency(us)
00:29:59.096 Device Information : IOPS MiB/s Average min max
00:29:59.096 PCIE (0000:d8:00.0) NSID 1 from core 0: 101755.74 397.48 314.05 34.23 8184.09
00:29:59.096 ========================================================
00:29:59.096 Total : 101755.74 397.48 314.05 34.23 8184.09
00:29:59.096
00:29:59.096 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:59.096 EAL: No free 2048 kB hugepages reported on node 1
00:30:00.473 Initializing NVMe Controllers
00:30:00.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:00.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:00.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:00.473 Initialization complete. Launching workers.
00:30:00.473 ========================================================
00:30:00.473 Latency(us)
00:30:00.473 Device Information : IOPS MiB/s Average min max
00:30:00.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 10815.71 133.14 44717.66
00:30:00.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 43.00 0.17 23755.34 7967.51 47890.68
00:30:00.473 ========================================================
00:30:00.473 Total : 137.00 0.54 14877.05 133.14 47890.68
00:30:00.473
00:30:00.473 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:00.473 EAL: No free 2048 kB hugepages reported on node 1
00:30:01.851 Initializing NVMe Controllers
00:30:01.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:01.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:01.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:01.851 Initialization complete. Launching workers.
00:30:01.851 ========================================================
00:30:01.851 Latency(us)
00:30:01.851 Device Information : IOPS MiB/s Average min max
00:30:01.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10034.30 39.20 3191.17 599.88 8670.50
00:30:01.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3820.88 14.93 8387.88 6324.95 16240.51
00:30:01.851 ========================================================
00:30:01.851 Total : 13855.19 54.12 4624.29 599.88 16240.51
00:30:01.851
00:30:01.851 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:30:01.851 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:30:01.851 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:01.851 EAL: No free 2048 kB hugepages reported on node 1
00:30:04.386 Initializing NVMe Controllers
00:30:04.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:04.386 Controller IO queue size 128, less than required.
00:30:04.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:04.386 Controller IO queue size 128, less than required.
00:30:04.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:04.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:04.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:04.386 Initialization complete. Launching workers.
00:30:04.386 ========================================================
00:30:04.386 Latency(us)
00:30:04.386 Device Information : IOPS MiB/s Average min max
00:30:04.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 958.29 239.57 137684.93 84555.93 181071.75
00:30:04.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.58 143.39 234658.35 55089.79 342189.40
00:30:04.386 ========================================================
00:30:04.386 Total : 1531.87 382.97 173994.62 55089.79 342189.40
00:30:04.386
00:30:04.386 13:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:30:04.386 EAL: No free 2048 kB hugepages reported on node 1
00:30:04.387 No valid NVMe controllers or AIO or URING devices found
00:30:04.387 Initializing NVMe Controllers
00:30:04.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:04.387 Controller IO queue size 128, less than required.
00:30:04.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:04.387 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:30:04.387 Controller IO queue size 128, less than required.
00:30:04.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:04.387 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:30:04.387 WARNING: Some requested NVMe devices were skipped
00:30:04.387 13:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:30:04.387 EAL: No free 2048 kB hugepages reported on node 1
00:30:06.922 Initializing NVMe Controllers
00:30:06.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:06.922 Controller IO queue size 128, less than required.
00:30:06.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.922 Controller IO queue size 128, less than required.
00:30:06.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:06.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:06.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:06.922 Initialization complete. Launching workers.
00:30:06.922
00:30:06.922 ====================
00:30:06.922 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:30:06.922 TCP transport:
00:30:06.922 polls: 38140
00:30:06.922 idle_polls: 10831
00:30:06.922 sock_completions: 27309
00:30:06.922 nvme_completions: 4185
00:30:06.922 submitted_requests: 6296
00:30:06.922 queued_requests: 1
00:30:06.922
00:30:06.922 ====================
00:30:06.922 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:30:06.922 TCP transport:
00:30:06.922 polls: 40828
00:30:06.922 idle_polls: 13067
00:30:06.922 sock_completions: 27761
00:30:06.922 nvme_completions: 4203
00:30:06.922 submitted_requests: 6316
00:30:06.922 queued_requests: 1
00:30:06.922 ========================================================
00:30:06.922 Latency(us)
00:30:06.922 Device Information : IOPS MiB/s Average min max
00:30:06.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1044.37 261.09 125983.53 66577.35 173443.00
00:30:06.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1048.86 262.22 125521.14 53125.39 183090.92
00:30:06.922 ========================================================
00:30:06.922 Total : 2093.23 523.31 125751.84 53125.39 183090.92
00:30:06.922
00:30:06.922 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:30:07.181 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:07.181 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:30:07.181 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']'
00:30:07.181 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=603e83c2-8f8a-4f60-a1c2-34502ccbb5f8
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 603e83c2-8f8a-4f60-a1c2-34502ccbb5f8
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=603e83c2-8f8a-4f60-a1c2-34502ccbb5f8
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:30:12.455 13:58:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:30:12.455 {
00:30:12.455 "uuid": "603e83c2-8f8a-4f60-a1c2-34502ccbb5f8",
00:30:12.455 "name": "lvs_0",
00:30:12.455 "base_bdev": "Nvme0n1",
00:30:12.455 "total_data_clusters": 381173,
00:30:12.455 "free_clusters": 381173,
00:30:12.455 "block_size": 512,
00:30:12.455 "cluster_size": 4194304
00:30:12.455 }
00:30:12.455 ]'
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="603e83c2-8f8a-4f60-a1c2-34502ccbb5f8") .free_clusters'
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=381173
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="603e83c2-8f8a-4f60-a1c2-34502ccbb5f8") .cluster_size'
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1524692
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1524692
00:30:12.455 1524692
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1524692 -gt 20480 ']'
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480
00:30:12.455 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 603e83c2-8f8a-4f60-a1c2-34502ccbb5f8 lbd_0 20480
00:30:13.059 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=6e8dea83-dcc3-43ff-bff1-b7993a647e20
00:30:13.059 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 6e8dea83-dcc3-43ff-bff1-b7993a647e20 lvs_n_0
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=bf7d6214-ac30-4c8b-989d-5589f830fb2c
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb bf7d6214-ac30-4c8b-989d-5589f830fb2c
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=bf7d6214-ac30-4c8b-989d-5589f830fb2c
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:30:13.996 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:14.256 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:30:14.256 {
00:30:14.256 "uuid": "603e83c2-8f8a-4f60-a1c2-34502ccbb5f8",
00:30:14.256 "name": "lvs_0",
00:30:14.256 "base_bdev": "Nvme0n1",
00:30:14.256 "total_data_clusters": 381173,
00:30:14.256 "free_clusters": 376053, 00:30:14.256 "block_size": 512, 00:30:14.256 "cluster_size": 4194304 00:30:14.256 }, 00:30:14.256 { 00:30:14.256 "uuid": "bf7d6214-ac30-4c8b-989d-5589f830fb2c", 00:30:14.256 "name": "lvs_n_0", 00:30:14.256 "base_bdev": "6e8dea83-dcc3-43ff-bff1-b7993a647e20", 00:30:14.256 "total_data_clusters": 5114, 00:30:14.256 "free_clusters": 5114, 00:30:14.256 "block_size": 512, 00:30:14.256 "cluster_size": 4194304 00:30:14.256 } 00:30:14.256 ]' 00:30:14.256 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bf7d6214-ac30-4c8b-989d-5589f830fb2c") .free_clusters' 00:30:14.256 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:14.256 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="bf7d6214-ac30-4c8b-989d-5589f830fb2c") .cluster_size' 00:30:14.256 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:14.256 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:14.256 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:14.256 20456 00:30:14.256 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:14.256 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf7d6214-ac30-4c8b-989d-5589f830fb2c lbd_nest_0 20456 00:30:14.515 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=c1cc5dee-d3cd-4706-86e2-c98b4874e629 00:30:14.515 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.515 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:14.515 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c1cc5dee-d3cd-4706-86e2-c98b4874e629 00:30:14.774 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.034 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:15.034 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:15.034 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:15.034 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.034 13:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.034 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.248 Initializing NVMe Controllers 00:30:27.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:27.248 Initialization complete. Launching workers. 
00:30:27.248 ========================================================
00:30:27.248 Latency(us)
00:30:27.248 Device Information : IOPS MiB/s Average min max
00:30:27.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.39 0.02 21628.73 222.70 45601.15
00:30:27.248 ========================================================
00:30:27.248 Total : 46.39 0.02 21628.73 222.70 45601.15
00:30:27.248
00:30:27.248 13:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:27.248 13:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:27.248 EAL: No free 2048 kB hugepages reported on node 1
00:30:37.229 Initializing NVMe Controllers
00:30:37.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:37.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:37.229 Initialization complete. Launching workers.
00:30:37.229 ========================================================
00:30:37.229 Latency(us)
00:30:37.229 Device Information : IOPS MiB/s Average min max
00:30:37.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.70 9.59 13044.85 4036.09 50876.47
00:30:37.229 ========================================================
00:30:37.229 Total : 76.70 9.59 13044.85 4036.09 50876.47
00:30:37.229
00:30:37.229 13:58:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:30:37.229 13:58:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:37.229 13:58:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:37.229 EAL: No free 2048 kB hugepages reported on node 1
00:30:47.208 Initializing NVMe Controllers
00:30:47.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:47.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:47.208 Initialization complete. Launching workers.
00:30:47.208 ========================================================
00:30:47.208 Latency(us)
00:30:47.208 Device Information : IOPS MiB/s Average min max
00:30:47.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9071.34 4.43 3527.40 288.31 8551.13
00:30:47.208 ========================================================
00:30:47.208 Total : 9071.34 4.43 3527.40 288.31 8551.13
00:30:47.208
00:30:47.208 13:58:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:47.208 13:58:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:47.208 EAL: No free 2048 kB hugepages reported on node 1
00:30:57.255 Initializing NVMe Controllers
00:30:57.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:57.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:57.255 Initialization complete. Launching workers.
00:30:57.255 ========================================================
00:30:57.255 Latency(us)
00:30:57.255 Device Information : IOPS MiB/s Average min max
00:30:57.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1876.60 234.57 17087.33 1237.41 44846.71
00:30:57.255 ========================================================
00:30:57.255 Total : 1876.60 234.57 17087.33 1237.41 44846.71
00:30:57.255
00:30:57.255 13:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:30:57.255 13:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:57.255 13:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:57.255 EAL: No free 2048 kB hugepages reported on node 1
00:31:07.236 Initializing NVMe Controllers
00:31:07.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:07.236 Controller IO queue size 128, less than required.
00:31:07.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:07.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:07.236 Initialization complete. Launching workers.
00:31:07.236 ========================================================
00:31:07.236 Latency(us)
00:31:07.236 Device Information : IOPS MiB/s Average min max
00:31:07.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15841.41 7.74 8080.15 1304.57 16409.25
00:31:07.236 ========================================================
00:31:07.236 Total : 15841.41 7.74 8080.15 1304.57 16409.25
00:31:07.236
00:31:07.236 13:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:07.236 13:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:07.236 EAL: No free 2048 kB hugepages reported on node 1
00:31:17.222 Initializing NVMe Controllers
00:31:17.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:17.222 Controller IO queue size 128, less than required.
00:31:17.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:17.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:17.222 Initialization complete. Launching workers.
00:31:17.222 ======================================================== 00:31:17.222 Latency(us) 00:31:17.222 Device Information : IOPS MiB/s Average min max 00:31:17.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1182.28 147.78 108531.47 15997.45 267087.79 00:31:17.222 ======================================================== 00:31:17.222 Total : 1182.28 147.78 108531.47 15997.45 267087.79 00:31:17.222 00:31:17.222 13:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.222 13:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c1cc5dee-d3cd-4706-86e2-c98b4874e629 00:31:17.791 13:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:17.791 13:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e8dea83-dcc3-43ff-bff1-b7993a647e20 00:31:18.050 13:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:18.310 rmmod nvme_tcp 00:31:18.310 rmmod nvme_fabrics 00:31:18.310 rmmod nvme_keyring 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 422968 ']' 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 422968 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 422968 ']' 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 422968 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:18.310 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 422968 00:31:18.570 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:18.570 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:18.570 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 422968' 00:31:18.570 killing process with pid 422968 00:31:18.570 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 422968 00:31:18.570 13:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 422968 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.476 13:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:23.013 00:31:23.013 real 1m36.938s 00:31:23.013 user 5m42.119s 00:31:23.013 sys 0m19.898s 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:23.013 ************************************ 00:31:23.013 END TEST nvmf_perf 00:31:23.013 ************************************ 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.013 ************************************ 00:31:23.013 START TEST nvmf_fio_host 00:31:23.013 ************************************ 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:23.013 * Looking for test storage... 
00:31:23.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.013 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:23.014 13:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:29.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:29.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.677 
13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:29.677 Found net devices under 0000:af:00.0: cvl_0_0 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:29.677 Found net devices under 0000:af:00.1: cvl_0_1 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:29.677 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
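[Editor's note: the nvmf_fio_host suite repeats the same namespace wiring the nvmf_perf run performed above. For readers following along, a few hedged sanity checks of the resulting topology, assuming the same interface and namespace names as this run; the listener check is only meaningful once nvmf_tgt is up, and this is not part of the harness output:]

ip netns list                                            # expect cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0    # expect 10.0.0.2/24
ip -4 addr show cvl_0_1                                  # expect 10.0.0.1/24
ip netns exec cvl_0_0_ns_spdk ss -ltn                    # after target start, expect a listener on 10.0.0.2:4420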
00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:29.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:31:29.678 00:31:29.678 --- 10.0.0.2 ping statistics --- 00:31:29.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.678 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:29.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:31:29.678 00:31:29.678 --- 10.0.0.1 ping statistics --- 00:31:29.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.678 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=440902 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 440902 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 440902 ']' 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:29.678 13:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.678 [2024-07-25 13:59:26.496926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:31:29.678 [2024-07-25 13:59:26.496974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.678 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.678 [2024-07-25 13:59:26.537034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:29.937 [2024-07-25 13:59:26.572778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.937 [2024-07-25 13:59:26.613275] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.937 [2024-07-25 13:59:26.613316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.937 [2024-07-25 13:59:26.613326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.937 [2024-07-25 13:59:26.613334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.937 [2024-07-25 13:59:26.613341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
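The target bring-up just traced, together with the RPC provisioning that follows (host/fio.sh@23-36), condenses to the sketch below. Paths, NQNs, and sizes are copied from this run; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its exact implementation.

#!/usr/bin/env bash
# Sketch: launch nvmf_tgt inside the namespace, wait for its RPC socket,
# then build the subsystem that the fio runs below exercise.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the UNIX-domain RPC socket answers (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    sleep 0.5
done
rpc() { "$SPDK/scripts/rpc.py" "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit
rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MiB RAM disk, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420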
00:31:29.937 [2024-07-25 13:59:26.613385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.937 [2024-07-25 13:59:26.613482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.937 [2024-07-25 13:59:26.613574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.937 [2024-07-25 13:59:26.613576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.504 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.504 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:30.504 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:30.763 [2024-07-25 13:59:27.460393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.763 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:30.763 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.763 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.763 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:31.021 Malloc1 00:31:31.021 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.280 13:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:31.280 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.539 [2024-07-25 13:59:28.241424] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.539 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:31.798 13:59:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:31.798 13:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:32.057 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:32.057 fio-3.35 00:31:32.057 Starting 1 thread 00:31:32.057 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.590 00:31:34.590 test: (groupid=0, jobs=1): err= 0: pid=441560: Thu Jul 25 13:59:31 2024 00:31:34.590 read: IOPS=12.3k, BW=48.0MiB/s (50.4MB/s)(96.3MiB/2005msec) 00:31:34.590 slat (nsec): min=1516, max=195318, avg=1635.95, stdev=1760.49 00:31:34.590 clat (usec): min=3574, max=9666, avg=5768.56, stdev=451.84 00:31:34.590 lat (usec): min=3582, max=9668, avg=5770.20, stdev=451.84 00:31:34.590 clat percentiles (usec): 00:31:34.590 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:31:34.590 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:31:34.590 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6456], 00:31:34.590 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9372], 00:31:34.590 | 99.99th=[ 9634] 00:31:34.590 bw ( KiB/s): min=47752, max=49896, per=99.89%, avg=49124.00, stdev=958.94, samples=4 00:31:34.590 iops : min=11938, max=12474, avg=12281.00, stdev=239.74, samples=4 00:31:34.590 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(96.0MiB/2005msec); 0 
zone resets 00:31:34.590 slat (nsec): min=1552, max=178727, avg=1714.07, stdev=1290.60 00:31:34.590 clat (usec): min=1932, max=8636, avg=4586.70, stdev=371.24 00:31:34.590 lat (usec): min=1944, max=8638, avg=4588.41, stdev=371.18 00:31:34.590 clat percentiles (usec): 00:31:34.590 | 1.00th=[ 3589], 5.00th=[ 3982], 10.00th=[ 4146], 20.00th=[ 4293], 00:31:34.590 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:31:34.590 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:31:34.590 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 7111], 99.95th=[ 7898], 00:31:34.590 | 99.99th=[ 8586] 00:31:34.590 bw ( KiB/s): min=48399, max=49792, per=99.99%, avg=49027.75, stdev=574.05, samples=4 00:31:34.590 iops : min=12099, max=12448, avg=12256.75, stdev=143.79, samples=4 00:31:34.590 lat (msec) : 2=0.01%, 4=2.59%, 10=97.41% 00:31:34.590 cpu : usr=63.32%, sys=30.64%, ctx=32, majf=0, minf=5 00:31:34.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:34.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:34.590 issued rwts: total=24650,24577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:34.590 00:31:34.590 Run status group 0 (all jobs): 00:31:34.590 READ: bw=48.0MiB/s (50.4MB/s), 48.0MiB/s-48.0MiB/s (50.4MB/s-50.4MB/s), io=96.3MiB (101MB), run=2005-2005msec 00:31:34.590 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=96.0MiB (101MB), run=2005-2005msec 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.590 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:34.591 13:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:34.849 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:34.849 fio-3.35 00:31:34.849 Starting 1 thread 00:31:34.849 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.384 00:31:37.384 test: (groupid=0, jobs=1): err= 0: pid=442190: Thu Jul 25 13:59:33 2024 00:31:37.384 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(332MiB/2004msec) 00:31:37.384 slat (usec): min=2, max=115, avg= 2.71, stdev= 1.93 00:31:37.384 clat (usec): min=1821, max=23235, avg=7382.93, stdev=2293.30 00:31:37.384 lat (usec): min=1824, max=23237, avg=7385.64, stdev=2293.61 00:31:37.384 clat percentiles (usec): 00:31:37.384 | 1.00th=[ 3621], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5604], 00:31:37.384 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7504], 00:31:37.384 | 70.00th=[ 8029], 80.00th=[ 8717], 90.00th=[10159], 95.00th=[12780], 00:31:37.384 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15795], 99.95th=[16057], 00:31:37.384 | 99.99th=[16319] 00:31:37.384 bw ( KiB/s): min=74944, max=96544, per=49.67%, avg=84368.00, stdev=9064.55, samples=4 00:31:37.384 iops : min= 4684, max= 6034, avg=5273.00, stdev=566.53, samples=4 00:31:37.384 write: IOPS=6357, BW=99.3MiB/s (104MB/s)(172MiB/1736msec); 0 zone resets 00:31:37.384 slat (usec): min=28, max=394, avg=30.21, stdev= 7.87 00:31:37.384 clat (usec): min=3842, max=18089, avg=8356.25, stdev=1712.49 00:31:37.384 lat (usec): min=3872, max=18121, avg=8386.47, stdev=1715.28 00:31:37.384 clat percentiles (usec): 00:31:37.384 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 6980], 00:31:37.384 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8455], 00:31:37.384 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[11338], 00:31:37.384 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16909], 99.95th=[17171], 00:31:37.384 | 99.99th=[17957] 00:31:37.384 bw ( KiB/s): min=78112, max=100352, per=86.54%, avg=88032.00, stdev=9216.85, samples=4 00:31:37.384 iops : min= 4882, max= 6272, avg=5502.00, stdev=576.05, samples=4 00:31:37.384 lat (msec) : 2=0.01%, 4=1.77%, 10=87.03%, 20=11.18%, 50=0.01% 00:31:37.384 cpu : usr=78.98%, sys=17.12%, ctx=36, majf=0, minf=2 
00:31:37.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:37.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:37.384 issued rwts: total=21275,11037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:37.384 00:31:37.384 Run status group 0 (all jobs): 00:31:37.384 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=332MiB (349MB), run=2004-2004msec 00:31:37.384 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=172MiB (181MB), run=1736-1736msec 00:31:37.384 13:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:37.384 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:37.643 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:37.643 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:31:37.643 13:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 10.0.0.2 00:31:40.934 Nvme0n1 00:31:40.934 13:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:45.128 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=043e534c-99f8-476b-ad41-1b2506abe4cf 00:31:45.128 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 043e534c-99f8-476b-ad41-1b2506abe4cf 00:31:45.128 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=043e534c-99f8-476b-ad41-1b2506abe4cf 00:31:45.128 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:45.128 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:45.129 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:45.129 13:59:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:45.388 { 00:31:45.388 "uuid": 
"043e534c-99f8-476b-ad41-1b2506abe4cf", 00:31:45.388 "name": "lvs_0", 00:31:45.388 "base_bdev": "Nvme0n1", 00:31:45.388 "total_data_clusters": 1489, 00:31:45.388 "free_clusters": 1489, 00:31:45.388 "block_size": 512, 00:31:45.388 "cluster_size": 1073741824 00:31:45.388 } 00:31:45.388 ]' 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="043e534c-99f8-476b-ad41-1b2506abe4cf") .free_clusters' 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1489 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="043e534c-99f8-476b-ad41-1b2506abe4cf") .cluster_size' 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1524736 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1524736 00:31:45.388 1524736 00:31:45.388 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1524736 00:31:45.647 481e0f34-9757-49c9-a09c-1f13098cba70 00:31:45.648 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:45.938 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:45.938 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:46.204 13:59:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:46.204 13:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.463 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:46.463 fio-3.35 00:31:46.463 Starting 1 thread 00:31:46.463 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.997 00:31:48.997 test: (groupid=0, jobs=1): err= 0: pid=444222: Thu Jul 25 13:59:45 2024 00:31:48.997 read: IOPS=7928, BW=31.0MiB/s (32.5MB/s)(62.1MiB/2006msec) 00:31:48.997 slat (nsec): min=1512, max=92090, avg=1624.13, stdev=1027.82 00:31:48.997 clat (usec): min=340, max=270767, avg=8733.20, stdev=15924.57 00:31:48.997 lat (usec): min=341, max=270770, avg=8734.82, stdev=15924.64 00:31:48.997 clat percentiles (msec): 00:31:48.997 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:31:48.997 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:31:48.997 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:31:48.997 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 271], 99.95th=[ 271], 00:31:48.997 | 99.99th=[ 271] 00:31:48.997 bw ( KiB/s): min=16136, max=37136, per=99.84%, avg=31662.00, stdev=10354.93, samples=4 00:31:48.997 iops : min= 4034, max= 9284, avg=7915.50, stdev=2588.73, samples=4 00:31:48.997 write: IOPS=7898, BW=30.9MiB/s (32.4MB/s)(61.9MiB/2006msec); 0 zone resets 00:31:48.997 slat (nsec): min=1547, max=80108, avg=1686.00, stdev=685.75 00:31:48.997 clat (usec): min=397, max=268699, avg=7308.62, stdev=16980.96 00:31:48.997 lat (usec): min=398, max=268704, avg=7310.31, stdev=16981.08 00:31:48.997 clat percentiles (msec): 00:31:48.997 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:48.997 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:48.997 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 8], 00:31:48.997 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 271], 99.95th=[ 271], 00:31:48.997 | 99.99th=[ 271] 00:31:48.997 bw ( KiB/s): min=17144, 
max=36616, per=99.96%, avg=31582.00, stdev=9626.61, samples=4 00:31:48.997 iops : min= 4286, max= 9154, avg=7895.50, stdev=2406.65, samples=4 00:31:48.997 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.03% 00:31:48.997 lat (msec) : 2=0.07%, 4=0.17%, 10=99.14%, 20=0.14%, 500=0.40% 00:31:48.997 cpu : usr=59.95%, sys=36.36%, ctx=87, majf=0, minf=5 00:31:48.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:48.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.997 issued rwts: total=15904,15845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.998 00:31:48.998 Run status group 0 (all jobs): 00:31:48.998 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.1MiB (65.1MB), run=2006-2006msec 00:31:48.998 WRITE: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=61.9MiB (64.9MB), run=2006-2006msec 00:31:48.998 13:59:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:49.257 13:59:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=92bc3181-289b-40fe-9bfb-9a978e4a11ba 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 92bc3181-289b-40fe-9bfb-9a978e4a11ba 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=92bc3181-289b-40fe-9bfb-9a978e4a11ba 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:50.194 13:59:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:50.453 { 00:31:50.453 "uuid": "043e534c-99f8-476b-ad41-1b2506abe4cf", 00:31:50.453 "name": "lvs_0", 00:31:50.453 "base_bdev": "Nvme0n1", 00:31:50.453 "total_data_clusters": 1489, 00:31:50.453 "free_clusters": 0, 00:31:50.453 "block_size": 512, 00:31:50.453 "cluster_size": 1073741824 00:31:50.453 }, 00:31:50.453 { 00:31:50.453 "uuid": "92bc3181-289b-40fe-9bfb-9a978e4a11ba", 00:31:50.453 "name": "lvs_n_0", 00:31:50.453 "base_bdev": "481e0f34-9757-49c9-a09c-1f13098cba70", 00:31:50.453 "total_data_clusters": 380811, 00:31:50.453 "free_clusters": 380811, 00:31:50.453 "block_size": 512, 00:31:50.453 "cluster_size": 4194304 00:31:50.453 } 00:31:50.453 ]' 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="92bc3181-289b-40fe-9bfb-9a978e4a11ba") .free_clusters' 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=380811 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="92bc3181-289b-40fe-9bfb-9a978e4a11ba") .cluster_size' 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1523244 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1523244 00:31:50.453 1523244 00:31:50.453 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1523244 00:31:51.391 bfeed8c2-2060-4197-bf3c-7d67c7192dbe 00:31:51.391 13:59:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:51.392 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:51.650 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:51.651 13:59:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:51.651 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:51.931 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:51.931 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:51.931 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:51.931 13:59:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.190 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:52.190 fio-3.35 00:31:52.190 Starting 1 thread 00:31:52.190 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.727 00:31:54.727 test: (groupid=0, jobs=1): err= 0: pid=445193: Thu Jul 25 13:59:51 2024 00:31:54.727 read: IOPS=8054, BW=31.5MiB/s (33.0MB/s)(63.1MiB/2006msec) 00:31:54.727 slat (nsec): min=1508, max=91504, avg=1599.98, stdev=988.78 00:31:54.727 clat (usec): min=2739, max=14068, avg=8804.71, stdev=713.70 00:31:54.727 lat (usec): min=2753, max=14069, avg=8806.31, stdev=713.64 00:31:54.727 clat percentiles (usec): 00:31:54.727 | 1.00th=[ 7177], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:31:54.727 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:31:54.727 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:31:54.727 | 99.00th=[10421], 99.50th=[10552], 99.90th=[12125], 99.95th=[13042], 00:31:54.727 | 99.99th=[14091] 00:31:54.727 bw ( KiB/s): min=30952, max=32736, per=99.90%, avg=32186.00, stdev=843.19, samples=4 00:31:54.727 iops : min= 7738, max= 8184, avg=8046.50, stdev=210.80, samples=4 00:31:54.727 write: IOPS=8035, BW=31.4MiB/s (32.9MB/s)(63.0MiB/2006msec); 0 zone resets 00:31:54.727 slat (nsec): min=1542, max=80572, avg=1677.19, stdev=702.31 00:31:54.727 clat (usec): min=1481, max=12155, avg=7000.99, stdev=629.02 00:31:54.727 lat (usec): min=1485, max=12156, avg=7002.67, stdev=629.00 00:31:54.727 clat percentiles (usec): 00:31:54.727 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6521], 00:31:54.727 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7111], 00:31:54.727 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:31:54.727 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[11207], 99.95th=[11994], 00:31:54.727 | 99.99th=[12125] 00:31:54.727 bw ( KiB/s): min=31872, max=32256, per=99.91%, avg=32112.00, stdev=168.32, samples=4 00:31:54.727 iops : min= 7968, max= 8064, avg=8028.00, stdev=42.08, samples=4 00:31:54.727 lat (msec) : 2=0.01%, 4=0.11%, 10=97.81%, 20=2.08% 00:31:54.727 cpu : usr=59.75%, sys=35.36%, ctx=75, majf=0, minf=5 00:31:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:54.728 issued rwts: total=16157,16119,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:54.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:54.728 00:31:54.728 Run status group 0 (all jobs): 00:31:54.728 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.1MiB (66.2MB), run=2006-2006msec 00:31:54.728 WRITE: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.0MiB (66.0MB), run=2006-2006msec 00:31:54.728 13:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:54.728 13:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:54.728 13:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:01.300 13:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:01.300 13:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:05.490 14:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:05.490 14:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:08.047 rmmod nvme_tcp 00:32:08.047 rmmod nvme_fabrics 00:32:08.047 rmmod nvme_keyring 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 440902 ']' 00:32:08.047 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 440902 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 440902 ']' 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 440902 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440902 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440902' 00:32:08.337 killing process with pid 440902 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 440902 00:32:08.337 14:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 440902 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.337 14:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:10.866 00:32:10.866 real 0m47.858s 00:32:10.866 user 3m16.607s 00:32:10.866 sys 0m11.200s 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.866 ************************************ 00:32:10.866 END TEST nvmf_fio_host 00:32:10.866 ************************************ 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.866 ************************************ 00:32:10.866 START TEST nvmf_failover 00:32:10.866 ************************************ 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:10.866 * Looking for test storage... 
00:32:10.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.866 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
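Before the failover test proper starts, nvmftestinit calls gather_supported_nvmf_pci_devs, whose trace fills the lines below. A condensed sketch of that scan follows; the supported-ID list here is abbreviated to the Intel e810/x722 IDs visible in this log, whereas the real function also matches several Mellanox device IDs.

#!/usr/bin/env bash
# Condensed sketch of the PCI NIC scan traced below: walk sysfs and keep
# NICs whose vendor:device pair is on the supported list (0x8086:0x159b is
# the Intel E810 port found twice in this run).
declare -a net_devs
for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor") dev=$(<"$pci/device")
    case "$ven:$dev" in
        0x8086:0x1592|0x8086:0x159b|0x8086:0x37d2)   # e810 (two IDs) / x722
            for nic in "$pci"/net/*; do              # NICs bound to this device
                [[ -e $nic ]] && net_devs+=("${nic##*/}")
            done
            ;;
    esac
done
(( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"

Run on the machine in this log, a scan like this would report cvl_0_0 and cvl_0_1 under 0000:af:00.0 and 0000:af:00.1, matching the "Found net devices" messages below.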
00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:10.867 14:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.448 14:00:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:17.448 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.448 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:17.449 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:17.449 Found net devices under 0000:af:00.0: cvl_0_0 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:17.449 Found net devices under 0000:af:00.1: cvl_0_1 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.449 14:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:17.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:32:17.449 00:32:17.449 --- 10.0.0.2 ping statistics --- 00:32:17.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.449 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:32:17.449 00:32:17.449 --- 10.0.0.1 ping statistics --- 00:32:17.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.449 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=451980 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:17.449 14:00:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 451980 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 451980 ']' 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.449 14:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.449 [2024-07-25 14:00:14.210475] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:17.449 [2024-07-25 14:00:14.210531] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.449 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.449 [2024-07-25 14:00:14.250905] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:17.449 [2024-07-25 14:00:14.285888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.449 [2024-07-25 14:00:14.326132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.449 [2024-07-25 14:00:14.326170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.449 [2024-07-25 14:00:14.326180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.449 [2024-07-25 14:00:14.326189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.449 [2024-07-25 14:00:14.326213] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
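The network setup traced above boils down to moving the target-side interface into a private network namespace, addressing both ends, opening the NVMe/TCP port, and launching nvmf_tgt inside the namespace. The commands below are condensed from the trace (the cvl_0_0/cvl_0_1 interfaces, the cvl_0_0_ns_spdk namespace, the 10.0.0.1/10.0.0.2 addresses and port 4420 all come from the log); the polling loop at the end is a simplified stand-in for the waitforlisten helper, not its actual implementation:

  # Condensed from the trace above; names and addresses come from the log.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # reachability check, as in the trace
  # Launch the target inside the namespace, then wait for its RPC socket.
  # This loop is a simplified stand-in for waitforlisten (max_retries=100 as traced):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break               # RPC socket is up
      kill -0 "$nvmfpid" 2>/dev/null || exit 1           # give up if the target died
      sleep 0.1
  done

Putting only the target NIC in the namespace gives the test two real TCP endpoints on one host: the initiator dials 10.0.0.2 from the root namespace while the target listens inside cvl_0_0_ns_spdk.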
00:32:17.449 [2024-07-25 14:00:14.326311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.449 [2024-07-25 14:00:14.326415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.449 [2024-07-25 14:00:14.326417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:18.386 [2024-07-25 14:00:15.210040] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.386 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:18.646 Malloc0 00:32:18.646 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:18.904 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:19.163 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.163 [2024-07-25 14:00:15.943021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.163 14:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:19.423 [2024-07-25 14:00:16.111469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:19.423 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:19.423 [2024-07-25 14:00:16.292053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=452284 00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 452284 /var/tmp/bdevperf.sock
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 452284 ']'
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:32:19.682 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:20.250 NVMe0n1
00:32:20.250 14:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:20.509
00:32:20.509 14:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=452539
00:32:20.509 14:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:20.509 14:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:32:21.445 14:00:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:21.705 [2024-07-25 14:00:18.389171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694cf0 is same with the state(5) to be set
[... the same tcp.c:1653 error for tqpair=0x1694cf0 repeats with advancing timestamps (14:00:18.389233 through 14:00:18.389487) while the port 4420 listener is torn down ...]
00:32:21.706 14:00:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:24.994 14:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:24.994
00:32:24.995 14:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:24.995 [2024-07-25 14:00:21.867303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695a70 is same with the state(5) to be set
[... the same tcp.c:1653 error for tqpair=0x1695a70 repeats (14:00:21.867345 through 14:00:21.867544) while the port 4421 listener is torn down ...]
00:32:25.252 14:00:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:28.536 14:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:28.536 [2024-07-25 14:00:25.063349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:28.536 14:00:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:29.516 14:00:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:29.516 [2024-07-25 14:00:26.263170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696860 is same with the state(5) to be set
[... the same tcp.c:1653 error for tqpair=0x1696860 repeats (14:00:26.263209 through 14:00:26.263393) while the port 4422 listener is torn down ...]
00:32:29.516 14:00:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 452539
00:32:36.094 0
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 452284
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 452284 ']'
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 452284
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 452284
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 452284'
killing process with pid 452284
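The failover sequence above (host/failover.sh steps @43 through @59) is the heart of the test: bdevperf keeps verify I/O running against bdev NVMe0, which has several attached paths to the same subsystem, while the script repeatedly removes the listener the active path is using. Condensed to its rpc.py calls (RPC below is shorthand for the full rpc.py path in the log; the comments are interpretive):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # as used in the trace
  # Paths on ports 4420 and 4421 were attached to bdev NVMe0 earlier.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the active path
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  sleep 3
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the original listener
  sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
  wait "$run_test_pid"          # perform_tests returned 0 above, i.e. the verify workload survived every switch

Note the asymmetry in the shuffle: a new path (4422) is attached before the second listener is removed, so there is always at least one live listener for the I/O to fail over to.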
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 452284
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 452284
00:32:36.094 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:36.094 [2024-07-25 14:00:16.351109] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:36.094 [2024-07-25 14:00:16.351164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452284 ]
00:32:36.094 EAL: No free 2048 kB hugepages reported on node 1
00:32:36.094 [2024-07-25 14:00:16.387318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:36.094 [2024-07-25 14:00:16.422640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:36.094 [2024-07-25 14:00:16.461379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:36.094 Running I/O for 15 seconds...
00:32:36.094 [2024-07-25 14:00:18.389793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.094 [2024-07-25 14:00:18.389832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print/abort pairs follow for the queued READ commands lba:106464 through lba:106568 ...]
00:32:36.094 [2024-07-25 14:00:18.390136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.094 [2024-07-25 14:00:18.390145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the capture continues with the same pattern, alternating batches of READ (lba:106576 through lba:106824) and WRITE (lba:106968 through lba:107272) commands on sqid:1, every one completed with 'ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0'; the capture breaks off mid-entry at 14:00:18.391574 ...]
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.391886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.391986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.391997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.392006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.392026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.096 [2024-07-25 14:00:18.392046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 
14:00:18.392186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.096 [2024-07-25 14:00:18.392364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.096 [2024-07-25 14:00:18.392394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:32:36.096 [2024-07-25 14:00:18.392403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107472 len:8 PRP1 0x0 PRP2 0x0 00:32:36.096 [2024-07-25 14:00:18.392412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392456] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce1270 was disconnected and freed. reset controller. 00:32:36.096 [2024-07-25 14:00:18.392467] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:36.096 [2024-07-25 14:00:18.392489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.096 [2024-07-25 14:00:18.392500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.096 [2024-07-25 14:00:18.392519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.096 [2024-07-25 14:00:18.392538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.096 [2024-07-25 14:00:18.392556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.096 [2024-07-25 14:00:18.392572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.096 [2024-07-25 14:00:18.395273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.096 [2024-07-25 14:00:18.395304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceddd0 (9): Bad file descriptor 00:32:36.096 [2024-07-25 14:00:18.549120] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
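What the burst above shows: when bdev_nvme tears down the qpair to fail over from 10.0.0.2:4420 to 10.0.0.2:4421, every command still outstanding on that submission queue is completed with generic status ABORTED - SQ DELETION (00/08). Note dnr:0 on every completion: the do-not-retry bit is clear, so these I/Os remain eligible for resubmission once the controller reset finishes, which is why the run continues past "Resetting controller successful." Below is a minimal sketch, assuming an SPDK build, of how a completion callback can classify this status. spdk_nvme_cpl_is_error(), SPDK_NVME_SCT_GENERIC, and SPDK_NVME_SC_ABORTED_SQ_DELETION are real definitions from include/spdk/nvme.h and include/spdk/nvme_spec.h; cpl_is_sq_deletion_abort() and the reconstructed completion are purely illustrative and not part of this test.

#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* True when an I/O failed only because its submission queue was deleted
 * mid-reset (the "ABORTED - SQ DELETION (00/08)" lines in this log):
 * status code type 0x00 (generic), status code 0x08. Such I/O is safe to
 * requeue once the controller has been reset or failed over. */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return spdk_nvme_cpl_is_error(cpl) &&
	       cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

int
main(void)
{
	/* Reconstruct one completion from the log (qid:1 cid:0 cdw0:0
	 * sqhd:0000 p:0 m:0 dnr:0, status 00/08) as a stand-in for the
	 * struct the transport would hand to an I/O callback. */
	struct spdk_nvme_cpl cpl = {0};

	cpl.status.sct = SPDK_NVME_SCT_GENERIC;
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;
	/* cpl.status.dnr stays 0, matching dnr:0 above: retry allowed. */

	printf("sq-deletion abort, retryable: %s\n",
	       cpl_is_sq_deletion_abort(&cpl) ? "yes" : "no");
	return 0;
}

This classification is consistent with the sequence logged above: the disconnected qpair's outstanding I/O is aborted with 00/08, the trid fails over to the 4421 listener, and the controller reset then completes successfully.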
00:32:36.096 [2024-07-25 14:00:21.868251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.096 [2024-07-25 14:00:21.868287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.097 [2024-07-25 14:00:21.868308 .. 14:00:21.870699] [second qpair teardown, roughly 3.3 s after the successful reset; dozens of repeated NOTICE pairs elided: READ sqid:1 lba:98976..99480 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) interleaved with WRITE sqid:1 lba:99544..99984 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:32:36.098 [2024-07-25 14:00:21.870709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.098 [2024-07-25 14:00:21.870722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.098 [2024-07-25 14:00:21.870742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.098 [2024-07-25 14:00:21.870762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.098 [2024-07-25 14:00:21.870781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.098 [2024-07-25 14:00:21.870802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.098 [2024-07-25 14:00:21.870822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:36.098 [2024-07-25 14:00:21.870863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:36.098 [2024-07-25 14:00:21.870872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0
00:32:36.098 [2024-07-25 14:00:21.870881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870926] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d11bf0 was disconnected and freed. reset controller.
00:32:36.098 [2024-07-25 14:00:21.870937] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:36.098 [2024-07-25 14:00:21.870960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:36.098 [2024-07-25 14:00:21.870970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:36.098 [2024-07-25 14:00:21.870990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.870999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:36.098 [2024-07-25 14:00:21.871008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.871018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:36.098 [2024-07-25 14:00:21.871027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.098 [2024-07-25 14:00:21.871036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:36.098 [2024-07-25 14:00:21.873730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:36.098 [2024-07-25 14:00:21.873762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceddd0 (9): Bad file descriptor
00:32:36.098 [2024-07-25 14:00:21.952659] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
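[Editor's note] The burst of "ABORTED - SQ DELETION (00/08)" completions above is one full failover cycle: the TCP qpair to 10.0.0.2:4421 is deleted, every command still queued on that submission queue is manually completed as aborted, and bdev_nvme retries the I/O on the next registered path (10.0.0.2:4422) once the controller reset succeeds. As a minimal sketch of how such alternate paths are typically registered through SPDK's rpc.py -- the bdev name NVMe0 and the exact invocation are assumptions; only the address, ports, and NQN are taken from this log:

    # Hedged sketch: attach one controller, then add further trids under the
    # same -b name so bdev_nvme can fail over 4420 -> 4421 -> 4422 when a
    # qpair disconnects. rpc.py path and bdev name are illustrative.
    rpc_py=./scripts/rpc.py
    for port in 4420 4421 4422; do
        $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s "$port" -n nqn.2016-06.io.spdk:cnode1
    done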
00:32:36.098 [2024-07-25 14:00:26.263777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.098 [2024-07-25 14:00:26.263973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.098 [2024-07-25 14:00:26.263983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.263993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264024] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27440 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.264800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 
14:00:26.264840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.264990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.264999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.099 [2024-07-25 14:00:26.265118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-25 14:00:26.265358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.099 [2024-07-25 14:00:26.265369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 
[2024-07-25 14:00:26.265648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.265984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.265995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266056] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.100 [2024-07-25 14:00:26.266247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.100 [2024-07-25 14:00:26.266267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.100 [2024-07-25 14:00:26.266286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.100 [2024-07-25 14:00:26.266306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.100 [2024-07-25 14:00:26.266326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.100 [2024-07-25 14:00:26.266346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.100 [2024-07-25 14:00:26.266366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:36.100 [2024-07-25 14:00:26.266396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:36.100 [2024-07-25 14:00:26.266404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27568 len:8 PRP1 0x0 PRP2 0x0
00:32:36.100 [2024-07-25 14:00:26.266414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.100 [2024-07-25 14:00:26.266458] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d11bf0 was disconnected and freed. reset controller.
00:32:36.100 [2024-07-25 14:00:26.266469] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:36.100 [2024-07-25 14:00:26.266492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.100 [2024-07-25 14:00:26.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.100 [2024-07-25 14:00:26.266522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.100 [2024-07-25 14:00:26.266542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.100 [2024-07-25 14:00:26.266562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.100 [2024-07-25 14:00:26.266571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.100 [2024-07-25 14:00:26.269279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.100 [2024-07-25 14:00:26.269310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceddd0 (9): Bad file descriptor 00:32:36.100 [2024-07-25 14:00:26.302012] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:36.100 
00:32:36.100                                                                  Latency(us)
00:32:36.100 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:32:36.100 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:36.100    Verification LBA range: start 0x0 length 0x4000
00:32:36.100    NVMe0n1             :      15.01   12081.97      47.20     867.56       0.00    9863.34     809.37   13421.77
00:32:36.100 ===================================================================================================================
00:32:36.100 Total                       :            12081.97      47.20     867.56       0.00    9863.34     809.37   13421.77
00:32:36.100 Received shutdown signal, test time was about 15.000000 seconds
00:32:36.100 
00:32:36.100                                                                  Latency(us)
00:32:36.100 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:32:36.100 ===================================================================================================================
00:32:36.100 Total                       :                0.00       0.00       0.00       0.00       0.00       0.00       0.00
14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=454920
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 454920 /var/tmp/bdevperf.sock
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 454920 ']'
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:32:36.100 14:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:36.359 [2024-07-25 14:00:33.006940] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:36.359 14:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:36.359 [2024-07-25 14:00:33.195446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:36.359 14:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:36.625 NVMe0n1
00:32:36.888 14:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:37.146 
00:32:37.147 14:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:37.147 
00:32:37.406 14:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:37.406 14:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:32:37.406 14:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:37.665 14:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:32:40.953 14:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:40.953 14:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:32:40.953 14:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:40.953 14:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=455724
00:32:40.953 14:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 455724
00:32:41.890 0
00:32:41.890 14:00:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:41.890 [2024-07-25 14:00:32.653626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:41.890 [2024-07-25 14:00:32.653684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454920 ]
00:32:41.890 EAL: No free 2048 kB hugepages reported on node 1
00:32:41.890 [2024-07-25 14:00:32.689757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:41.890 [2024-07-25 14:00:32.726016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:41.890 [2024-07-25 14:00:32.761373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:41.890 [2024-07-25 14:00:34.357720] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:41.890 [2024-07-25 14:00:34.357765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:41.890 [2024-07-25 14:00:34.357780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:41.890 [2024-07-25 14:00:34.357792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:41.890 [2024-07-25 14:00:34.357802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:41.890 [2024-07-25 14:00:34.357812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:41.890 [2024-07-25 14:00:34.357821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:41.890 [2024-07-25 14:00:34.357830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:41.890 [2024-07-25 14:00:34.357840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:41.890 [2024-07-25 14:00:34.357849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.890 [2024-07-25 14:00:34.357877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.890 [2024-07-25 14:00:34.357895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116ddd0 (9): Bad file descriptor
00:32:41.890 [2024-07-25 14:00:34.403242] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:41.890 Running I/O for 1 seconds...
00:32:41.890 
00:32:41.890                                                                  Latency(us)
00:32:41.890 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:32:41.890 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:41.890    Verification LBA range: start 0x0 length 0x4000
00:32:41.890    NVMe0n1             :       1.01   12121.39      47.35       0.00       0.00   10519.86    2280.65   14470.35
00:32:41.890 ===================================================================================================================
00:32:41.890 Total                       :            12121.39      47.35       0.00       0.00   10519.86    2280.65   14470.35
00:32:41.890 14:00:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:00:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:32:42.148 14:00:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:42.148 14:00:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:00:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:32:42.406 14:00:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:42.664 14:00:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:32:45.949 14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:32:45.949 14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 454920
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 454920 ']'
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 454920
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 454920
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 454920'
killing process with pid 454920
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 454920
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 454920
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
14:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:46.207 14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:46.207 rmmod nvme_tcp
00:32:46.207 rmmod nvme_fabrics
00:32:46.207 rmmod nvme_keyring
00:32:46.207 14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:46.465 14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 451980 ']'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 451980
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 451980 ']'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 451980
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451980
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451980'
killing process with pid 451980
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 451980
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 451980
00:32:46.725 14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:00:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:48.625 14:00:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:48.625 
00:32:48.625 real	0m38.091s
00:32:48.625 user	1m56.457s
00:32:48.625 sys	0m9.795s
00:32:48.625 14:00:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:48.625 14:00:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:48.625 ************************************
00:32:48.625 END TEST nvmf_failover
00:32:48.625 ************************************
00:32:48.626 14:00:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:32:48.626 14:00:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:48.626 14:00:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:48.626 14:00:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:48.884 ************************************
00:32:48.884 START TEST nvmf_host_discovery
00:32:48.884 ************************************
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:32:48.884 * Looking for test storage...
00:32:48.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:32:48.884 14:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=()
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:55.475 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:55.476 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:55.476 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:55.476 14:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:55.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:55.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms
00:32:55.476 
00:32:55.476 --- 10.0.0.2 ping statistics ---
00:32:55.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:55.476 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:55.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:55.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:32:55.476 
00:32:55.476 --- 10.0.0.1 ping statistics ---
00:32:55.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:55.476 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=460244
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 460244
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 460244 ']'
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:55.476 14:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:55.476 [2024-07-25 14:00:52.304309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:55.476 [2024-07-25 14:00:52.304359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:55.476 EAL: No free 2048 kB hugepages reported on node 1
00:32:55.476 [2024-07-25 14:00:52.344391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:55.476 [2024-07-25 14:00:52.378967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:55.734 [2024-07-25 14:00:52.416601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:55.734 [2024-07-25 14:00:52.416646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:55.734 [2024-07-25 14:00:52.416655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:55.734 [2024-07-25 14:00:52.416664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:55.734 [2024-07-25 14:00:52.416671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:55.734 [2024-07-25 14:00:52.416695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:56.302 [2024-07-25 14:00:53.157217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:56.302 [2024-07-25 14:00:53.165357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:56.302 null0
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:56.302 null1
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:56.302 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
host/discovery.sh@45 -- # hostpid=460477 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 460477 /tmp/host.sock 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 460477 ']' 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:56.561 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:56.561 14:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.561 [2024-07-25 14:00:53.241931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:56.561 [2024-07-25 14:00:53.241978] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460477 ] 00:32:56.561 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.561 [2024-07-25 14:00:53.278926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:56.561 [2024-07-25 14:00:53.311855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.561 [2024-07-25 14:00:53.350037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.502 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.503 [2024-07-25 14:00:54.380529] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.503 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
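
At this point the target side is fully provisioned (subsystem cnode0, namespace null0, listener on 4420), but the discovery controller will not attach it until the host NQN is allowed via nvmf_subsystem_add_host at @103 below. Every assertion from here on goes through waitforcondition, whose shape the @914-@920 xtrace just below reveals, and the notification checks go through the counter traced at host/discovery.sh@74-@75. A plausible reconstruction — the timeout return path and the notify_id arithmetic are inferred (the cursor values traced later run 0 -> 1 -> 2 -> 4, consistent with advancing by the count):

    # waitforcondition, per the common/autotest_common.sh@914-@920 xtrace below:
    # evaluate a condition up to 10 times, one second apart. The failure return
    # is an assumption; the trace only ever shows the success path.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Notification bookkeeping, per the host/discovery.sh@74-@75 xtrace: count the
    # events newer than the notify_id cursor, then advance the cursor.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

is_notification_count_eq N (host/discovery.sh@79-@80) then reduces to waitforcondition 'get_notification_count && ((notification_count == expected_count))'.
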
00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:57.762 14:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:58.329 [2024-07-25 14:00:55.055259] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:58.329 [2024-07-25 14:00:55.055279] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:58.329 [2024-07-25 14:00:55.055291] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:58.329 [2024-07-25 14:00:55.141549] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:58.587 [2024-07-25 14:00:55.280861] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:58.587 [2024-07-25 14:00:55.280880] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:58.851 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.852 14:00:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.852 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:58.852 14:00:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.113 [2024-07-25 14:00:55.904777] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:59.113 [2024-07-25 14:00:55.905136] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:59.113 [2024-07-25 14:00:55.905157] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:59.113 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:59.114 14:00:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.114 14:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:59.372 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.373 [2024-07-25 14:00:56.031536] 
bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:59.373 14:00:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:59.373 [2024-07-25 14:00:56.132219] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:59.373 [2024-07-25 14:00:56.132237] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:59.373 [2024-07-25 14:00:56.132243] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s 
/tmp/host.sock notify_get_notifications -i 2 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:00.309 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.310 [2024-07-25 14:00:57.177131] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:00.310 [2024-07-25 14:00:57.177152] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:00.310 [2024-07-25 14:00:57.183130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.310 [2024-07-25 14:00:57.183150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.310 [2024-07-25 14:00:57.183161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.310 [2024-07-25 14:00:57.183171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.310 [2024-07-25 14:00:57.183181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.310 [2024-07-25 14:00:57.183191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.310 [2024-07-25 14:00:57.183202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.310 [2024-07-25 14:00:57.183211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.310 [2024-07-25 14:00:57.183220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.310 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.310 [2024-07-25 14:00:57.193144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.569 [2024-07-25 14:00:57.203181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.569 [2024-07-25 14:00:57.203493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.569 [2024-07-25 14:00:57.203509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.569 [2024-07-25 14:00:57.203519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.569 [2024-07-25 14:00:57.203536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.569 [2024-07-25 14:00:57.203556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.569 [2024-07-25 14:00:57.203565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.569 [2024-07-25 14:00:57.203575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.569 [2024-07-25 14:00:57.203587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
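
The error burst running through these records is expected: host/discovery.sh@127 above just removed the 4420 listener from the target, so the initiator's automatic reconnect attempts to 10.0.0.2:4420 fail with errno 111 (ECONNREFUSED) and each bdev_nvme reset cycle ends in "Resetting controller failed." The noise stops once the discovery poller fetches a fresh log page and drops the stale path, which is what the @131 check below waits for. A sketch of that check, with get_subsystem_paths reconstructed from the host/discovery.sh@63 xtrace (NVMF_SECOND_PORT is 4421 in this run):

    # Per-controller path list, per the host/discovery.sh@63 xtrace: print each
    # connected trid's service id (port), numerically sorted, space-joined.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # host/discovery.sh@131: only the 4421 path should survive the listener removal.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
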
00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.569 [2024-07-25 14:00:57.213236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.569 [2024-07-25 14:00:57.213579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.569 [2024-07-25 14:00:57.213593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.569 [2024-07-25 14:00:57.213603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.569 [2024-07-25 14:00:57.213615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.569 [2024-07-25 14:00:57.213634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.569 [2024-07-25 14:00:57.213643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.569 [2024-07-25 14:00:57.213652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.569 [2024-07-25 14:00:57.213663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:00.569 [2024-07-25 14:00:57.223289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.569 [2024-07-25 14:00:57.223637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.569 [2024-07-25 14:00:57.223653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.569 [2024-07-25 14:00:57.223663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.569 [2024-07-25 14:00:57.223676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.569 [2024-07-25 14:00:57.223704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.569 [2024-07-25 14:00:57.223718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.569 [2024-07-25 14:00:57.223728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.569 [2024-07-25 14:00:57.223740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:00.569 [2024-07-25 14:00:57.233346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.569 [2024-07-25 14:00:57.233621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.569 [2024-07-25 14:00:57.233635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.569 [2024-07-25 14:00:57.233645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.569 [2024-07-25 14:00:57.233657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.569 [2024-07-25 14:00:57.233669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.569 [2024-07-25 14:00:57.233678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.569 [2024-07-25 14:00:57.233687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.569 [2024-07-25 14:00:57.233698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
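
Further below (@145, @151) the test also inspects the discovery service itself via get_discovery_ctrlrs; its pipeline is visible in the host/discovery.sh@67 xtrace near the end of this section. A sketch under the same caveats as the helpers above:

    # Reconstructed from the host/discovery.sh@67 xtrace below; a sketch.
    get_discovery_ctrlrs() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
    }
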
00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.569 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.569 [2024-07-25 14:00:57.243399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.569 [2024-07-25 14:00:57.243727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.569 [2024-07-25 14:00:57.243744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.569 [2024-07-25 14:00:57.243754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.569 [2024-07-25 14:00:57.243768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.569 [2024-07-25 14:00:57.243781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.569 [2024-07-25 14:00:57.243790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.569 [2024-07-25 14:00:57.243799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.569 [2024-07-25 14:00:57.243811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:00.570 [2024-07-25 14:00:57.253455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.570 [2024-07-25 14:00:57.253792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.570 [2024-07-25 14:00:57.253807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.570 [2024-07-25 14:00:57.253817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.570 [2024-07-25 14:00:57.253830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.570 [2024-07-25 14:00:57.253843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.570 [2024-07-25 14:00:57.253851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.570 [2024-07-25 14:00:57.253864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.570 [2024-07-25 14:00:57.253875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
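
One more pattern worth decoding before the tail of the run: at @143 and @149 below, re-issuing bdev_nvme_start_discovery with an already-used name or an already-discovered target returns JSON-RPC error -17 ("File exists"), and the test wraps the call in NOT to assert that the failure happens. The @650-@677 xtrace below shows NOT's outline; a simplified sketch (the traced handling of exit statuses above 128, i.e. signals, is elided):

    # Expected-failure wrapper, per the common/autotest_common.sh@650-@677 xtrace
    # below; simplified. Succeeds only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0
    }

    # host/discovery.sh@143: a second start with the same -b name must fail.
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

The -w flag corresponds to the "wait_for_attach": true field visible in the request dumps below, and the expected rejection shows up there as code -17, message "File exists".
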
00:33:00.570 [2024-07-25 14:00:57.263507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:00.570 [2024-07-25 14:00:57.263686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.570 [2024-07-25 14:00:57.263699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7c60 with addr=10.0.0.2, port=4420 00:33:00.570 [2024-07-25 14:00:57.263708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c60 is same with the state(5) to be set 00:33:00.570 [2024-07-25 14:00:57.263724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7c60 (9): Bad file descriptor 00:33:00.570 [2024-07-25 14:00:57.263736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.570 [2024-07-25 14:00:57.263745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:00.570 [2024-07-25 14:00:57.263754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.570 [2024-07-25 14:00:57.263766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:00.570 [2024-07-25 14:00:57.263969] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:00.570 [2024-07-25 14:00:57.263984] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.570 14:00:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
max-- )) 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:00.570 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.829 14:00:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.765 [2024-07-25 14:00:58.645957] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:01.765 [2024-07-25 14:00:58.645975] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:01.765 [2024-07-25 14:00:58.645987] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:02.023 [2024-07-25 14:00:58.732238] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:02.281 [2024-07-25 14:00:59.042271] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:02.281 [2024-07-25 14:00:59.042297] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.281 request: 00:33:02.281 { 00:33:02.281 "name": "nvme", 00:33:02.281 "trtype": "tcp", 00:33:02.281 "traddr": "10.0.0.2", 00:33:02.281 "adrfam": "ipv4", 00:33:02.281 "trsvcid": "8009", 00:33:02.281 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:02.281 "wait_for_attach": true, 00:33:02.281 "method": "bdev_nvme_start_discovery", 00:33:02.281 "req_id": 1 00:33:02.281 } 00:33:02.281 Got JSON-RPC error response 00:33:02.281 response: 00:33:02.281 { 00:33:02.281 "code": -17, 00:33:02.281 "message": "File exists" 00:33:02.281 } 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.281 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.539 request: 00:33:02.539 { 00:33:02.539 "name": "nvme_second", 00:33:02.539 "trtype": "tcp", 00:33:02.539 "traddr": "10.0.0.2", 00:33:02.539 "adrfam": "ipv4", 00:33:02.539 "trsvcid": "8009", 00:33:02.539 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:02.539 "wait_for_attach": true, 00:33:02.539 "method": "bdev_nvme_start_discovery", 00:33:02.539 "req_id": 1 00:33:02.539 } 00:33:02.539 Got JSON-RPC error response 00:33:02.539 response: 00:33:02.539 { 00:33:02.539 "code": -17, 00:33:02.539 "message": "File exists" 00:33:02.539 } 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:02.539 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
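Note: the two rejected bdev_nvme_start_discovery calls above are this test's expected-failure ("NOT") checks. Once a discovery service is attached for 10.0.0.2:8009, any further start request against that address/port is refused with JSON-RPC error -17 ("File exists"), whether it reuses the name nvme or asks for nvme_second, and the bdev list (nvme0n1 nvme0n2) is then verified to be unchanged. A minimal sketch of the rejected call, reusing the rpc.py path and host socket from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Expected to fail with -17: 10.0.0.2:8009 already has a discovery service.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
        || echo 'rejected as expected'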
host/discovery.sh@67 -- # jq -r '.[].name' 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.540 14:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.476 [2024-07-25 14:01:00.293894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.476 [2024-07-25 14:01:00.293930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1912870 with addr=10.0.0.2, port=8010 00:33:03.476 [2024-07-25 14:01:00.293949] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:03.476 [2024-07-25 14:01:00.293958] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:03.476 [2024-07-25 14:01:00.293967] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:04.410 [2024-07-25 14:01:01.296240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.410 [2024-07-25 14:01:01.296267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1912870 with addr=10.0.0.2, port=8010 00:33:04.410 [2024-07-25 14:01:01.296281] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:04.410 [2024-07-25 14:01:01.296289] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:04.410 [2024-07-25 14:01:01.296297] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:05.786 [2024-07-25 14:01:02.298351] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:05.786 request: 00:33:05.786 { 00:33:05.786 "name": "nvme_second", 00:33:05.786 "trtype": "tcp", 00:33:05.786 "traddr": "10.0.0.2", 00:33:05.786 "adrfam": "ipv4", 00:33:05.786 "trsvcid": "8010", 00:33:05.786 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:05.786 "wait_for_attach": false, 00:33:05.786 "attach_timeout_ms": 3000, 00:33:05.786 "method": "bdev_nvme_start_discovery", 00:33:05.786 "req_id": 1 00:33:05.786 } 00:33:05.786 Got JSON-RPC error response 00:33:05.786 response: 00:33:05.786 { 00:33:05.786 "code": -110, 00:33:05.786 "message": "Connection timed out" 00:33:05.786 } 00:33:05.786 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:05.786 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:05.786 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:05.786 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:05.786 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:05.786 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 460477 00:33:05.787 14:01:02 
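Note: nothing listens on 10.0.0.2:8010, so each connect() above fails with errno 111 (connection refused) and the discovery poller retries about once per second (attempts at 14:01:00, 14:01:01, 14:01:02). With -T 3000 (attach_timeout_ms=3000, wait_for_attach=false) the attach is abandoned after roughly 3 s and the RPC completes with JSON-RPC error -110 ("Connection timed out"). A sketch of the timed-out call, same rpc.py and socket as above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Fails with -110 after ~3 s: no discovery target listens on port 8010.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000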
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:05.787 rmmod nvme_tcp 00:33:05.787 rmmod nvme_fabrics 00:33:05.787 rmmod nvme_keyring 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 460244 ']' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 460244 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 460244 ']' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 460244 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 460244 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 460244' 00:33:05.787 killing process with pid 460244 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 460244 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 460244 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.787 14:01:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
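Note: teardown proceeds in the order traced here: the host-side SPDK process (pid 460477) is killed, nvmfcleanup unloads the kernel initiator modules (the bare "rmmod nvme_tcp / rmmod nvme_fabrics / rmmod nvme_keyring" lines are modprobe's verbose output), killprocess stops the nvmf_tgt reactor (pid 460244), and nvmf_tcp_fini removes the test namespace and flushes the interface addresses. A sketch of the unload step; the {1..20} retry bound is taken from the trace, but the back-off between attempts is an assumption since only the first, successful pass is logged:

    # nvmfcleanup: unload the initiator stack, retrying while references drain.
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off; not visible in this log
    done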
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:08.367 00:33:08.367 real 0m19.200s 00:33:08.367 user 0m22.599s 00:33:08.367 sys 0m7.021s 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.367 ************************************ 00:33:08.367 END TEST nvmf_host_discovery 00:33:08.367 ************************************ 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.367 ************************************ 00:33:08.367 START TEST nvmf_host_multipath_status 00:33:08.367 ************************************ 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:08.367 * Looking for test storage... 00:33:08.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.367 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.368 
14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:33:08.368 14:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:14.947 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:14.947 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.947 14:01:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:14.947 Found net devices under 0000:af:00.0: cvl_0_0 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:14.947 Found net devices under 0000:af:00.1: cvl_0_1 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.947 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
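Note: the device scan above walks the supported Intel/Mellanox PCI IDs, keeps the two E810 ports (0000:af:00.0 and 0000:af:00.1, device 0x159b), and resolves each function to its net device through sysfs, yielding cvl_0_0 (target side) and cvl_0_1 (initiator side). The resolution is just a glob over sysfs, roughly:

    # The net device for a PCI function is the entry under its net/ directory;
    # for 0000:af:00.0 this prints cvl_0_0.
    ls /sys/bus/pci/devices/0000:af:00.0/net/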
"$NVMF_TARGET_NAMESPACE") 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:14.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:33:14.948 00:33:14.948 --- 10.0.0.2 ping statistics --- 00:33:14.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.948 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:33:14.948 00:33:14.948 --- 10.0.0.1 ping statistics --- 00:33:14.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.948 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=465665 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 465665 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 465665 ']' 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:14.948 14:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.948 [2024-07-25 14:01:11.515436] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
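Note: the TCP fixture above puts the target port into its own network namespace so host and target can share one machine: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens port 4420 on the initiator interface, and the two pings (0.151 ms and 0.237 ms) prove both directions work. Condensed from the common.sh trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns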
00:33:14.948 [2024-07-25 14:01:11.515485] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.948 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.948 [2024-07-25 14:01:11.555997] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:14.948 [2024-07-25 14:01:11.592056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:14.948 [2024-07-25 14:01:11.630985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.948 [2024-07-25 14:01:11.631026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.948 [2024-07-25 14:01:11.631036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.948 [2024-07-25 14:01:11.631046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.948 [2024-07-25 14:01:11.631054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.948 [2024-07-25 14:01:11.631096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.948 [2024-07-25 14:01:11.631099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.516 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:15.516 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:15.517 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:15.517 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:15.517 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:15.517 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.517 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=465665 00:33:15.517 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:15.776 [2024-07-25 14:01:12.513340] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.776 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:16.035 Malloc0 00:33:16.035 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:16.035 14:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.294 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # 
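Note: with connectivity verified, nvmfappstart launches nvmf_tgt inside the namespace (pid 465665) and the test builds the multipath layout on the target: a TCP transport, a 64 MiB Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode1 created with -a (allow any host), -r (ANA reporting, which the later listener-state checks rely on) and -m 2, with Malloc0 added as its namespace. Condensed from the trace, same rpc.py as before; plain rpc.py works here because the target's unix socket is reachable across the netns boundary:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0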
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.553 [2024-07-25 14:01:13.215640] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:16.553 [2024-07-25 14:01:13.392091] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=466084 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 466084 /var/tmp/bdevperf.sock 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 466084 ']' 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:16.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.553 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:16.812 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:16.812 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:16.812 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:17.071 14:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:17.330 Nvme0n1 00:33:17.330 14:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:17.588 Nvme0n1 00:33:17.588 14:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:17.588 14:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:20.124 14:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:20.124 14:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:20.124 14:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:20.124 14:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:21.061 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:21.061 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:21.061 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.061 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.320 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.320 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:21.321 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
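Note: the host side then attaches the same subsystem twice through bdevperf's RPC socket, once per listener, the second time with -x multipath so both connections collapse into a single Nvme0n1 bdev with two I/O paths (hence "Nvme0n1" printed twice above). Per rpc.py's short options, -l -1 and -o 10 here should be an unlimited controller-loss timeout and a 10 s reconnect delay, and bdev_nvme_set_options -r -1 an unlimited bdev retry count. Condensed from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10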
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.321 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.321 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.321 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.321 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.321 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.579 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.579 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.579 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.579 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.838 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.097 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.097 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:22.097 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:22.357 14:01:19 
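Note: check_status is built from one helper: port_status dumps the io_paths tree from bdevperf and uses jq to pull a single boolean (current, connected, or accessible) for the path whose trsvcid matches. With both listeners optimized, 4420 reports current=true and 4421 current=false, while both stay connected and accessible. The probe, verbatim from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'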
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:22.615 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:23.553 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:23.553 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:23.553 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.553 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.812 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.072 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.072 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.072 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.072 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.331 14:01:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.331 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.591 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.591 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:24.591 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:24.850 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:24.850 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.230 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.230 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:26.230 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.230 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.230 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.489 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.489 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.489 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.489 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.817 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:27.076 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.076 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:27.076 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:27.335 14:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:27.594 14:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:28.532 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:28.532 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:28.532 14:01:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.532 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.792 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:29.051 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.051 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:29.051 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.051 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:29.310 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.310 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:29.310 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.310 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:29.310 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.310 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:29.310 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.310 14:01:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:29.569 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.569 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:29.569 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:29.828 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:29.828 14:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:31.204 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:31.204 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:31.204 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.204 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:31.204 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.205 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:31.205 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.205 14:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:31.205 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.205 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:31.205 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.205 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:31.463 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.463 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:31.463 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.463 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.722 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:31.981 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.981 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:31.981 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:32.239 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:32.239 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:33.615 14:01:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.615 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.874 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.874 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.874 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.874 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:34.133 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.133 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:34.392 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.392 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:34.650 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:34.650 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:34.909 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:34.909 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.286 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.286 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.286 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.286 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.286 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.544 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.544 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.544 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.544 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.803 14:01:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.803 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.061 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.061 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:37.061 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.320 14:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:37.579 14:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:38.514 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:38.514 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:38.514 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.514 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.774 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.032 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.032 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.032 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.032 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.291 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.291 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.291 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.291 14:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:39.291 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.291 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:39.291 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.291 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.550 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.550 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:39.550 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:39.808 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:40.067 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
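For readers following the trace: the repeating @59-@73 markers above are the helper functions of test/nvmf/host/multipath_status.sh being printed by bash xtrace. A minimal reconstruction of those helpers, inferred from the traced commands alone (not copied from the SPDK sources, so the exact names and argument order are assumptions checked only against this log):

    # Assumed to match the rpc.py path used throughout the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # set_ANA_state <state-for-4420> <state-for-4421> (trace lines @59/@60):
    # flips the ANA state of both target listeners in one call.
    set_ANA_state() {
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # port_status <port> <field> <expected> (trace line @64): reads the io_path
    # for one listener port from the bdevperf app over its RPC socket and
    # asserts a single boolean field (current / connected / accessible).
    port_status() {
        local status
        status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    # check_status <cur-4420> <cur-4421> <conn-4420> <conn-4421> <acc-4420> <acc-4421>
    # (trace lines @68-@73): the six assertions run after every ANA transition.
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Note that every set_ANA_state in the trace is followed by a sleep 1 before check_status runs, giving the host time to consume the ANA change notification; without it the assertions would race the target-side state change. Under the default active_passive policy only one path reports current=true at a time, whereas after the @116 bdev_nvme_set_multipath_policy call switches Nvme0n1 to active_active, both optimized paths report current=true (the @121 check_status true true ... above).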
00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.068 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.327 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.327 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.327 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.327 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.586 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.586 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.586 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.586 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.844 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:42.103 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.103 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:42.103 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:42.362 14:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:42.362 14:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.738 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.996 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:43.996 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.996 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.996 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.255 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.255 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:44.255 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.255 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.514 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.514 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:44.514 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.514 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 466084 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 466084 ']' 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 466084 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466084 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466084' 00:33:44.515 killing process with pid 466084 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 466084 00:33:44.515 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 466084 00:33:44.788 Connection closed with partial response: 00:33:44.788 00:33:44.788 00:33:44.788 
14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 466084 00:33:44.788 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:44.788 [2024-07-25 14:01:13.444539] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:44.788 [2024-07-25 14:01:13.444597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466084 ] 00:33:44.788 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.788 [2024-07-25 14:01:13.480921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:44.788 [2024-07-25 14:01:13.512360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.788 [2024-07-25 14:01:13.550674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.788 Running I/O for 90 seconds... 00:33:44.788 [2024-07-25 14:01:26.498627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.788 [2024-07-25 14:01:26.498671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:44.788 [2024-07-25 14:01:26.498708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.788 [2024-07-25 14:01:26.498727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.788 [2024-07-25 14:01:26.498742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.788 [2024-07-25 14:01:26.498752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.788 [2024-07-25 14:01:26.498767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.788 [2024-07-25 14:01:26.498777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:44.789 [2024-07-25 14:01:26.498849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.498988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.498997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:44.789 [2024-07-25 14:01:26.499784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:44.789 [2024-07-25 14:01:26.499836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.789 [2024-07-25 14:01:26.499845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.499861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.499870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.499887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.499896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.499912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.499921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.499938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.499947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.499963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.499973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.499989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.499998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.790 [2024-07-25 14:01:26.500568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
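The paired NOTICE lines above are SPDK's I/O error trace: nvme_io_qpair_print_command() logs each failed submission (opcode, sqid, cid, nsid, lba, len) and spdk_nvme_print_completion() logs the matching completion. The status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe status code type 0x3 (path-related) with status code 0x02 (asymmetric access inaccessible): the I/O was failed because the path's ANA state was inaccessible, which is the expected transient state while this test moves paths. A minimal sketch for summarizing such a dump offline, assuming only that the console log is fed on stdin; the script and its names are illustrative helpers, not part of the test harness:

    #!/usr/bin/env python3
    # Tally SPDK qpair NOTICE records from a console log read on stdin.
    import collections
    import re
    import sys

    # Command print, e.g.:
    #   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20632 len:8 ...
    CMD = re.compile(
        r'nvme_io_qpair_print_command: \*NOTICE\*: '
        r'(?P<op>\w+) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+ len:\d+')
    # Completion print, e.g.:
    #   spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 ...
    CPL = re.compile(
        r'spdk_nvme_print_completion: \*NOTICE\*: '
        r'(?P<status>.+?) \((?P<sct>[0-9a-f]+)/(?P<sc>[0-9a-f]+)\)')

    cmds = collections.Counter()
    cpls = collections.Counter()
    for line in sys.stdin:
        # Use finditer, not match: several records can share one physical
        # line when the console stream has been re-wrapped, as above.
        for m in CMD.finditer(line):
            cmds[m.group('op')] += 1
        for m in CPL.finditer(line):
            cpls[(m.group('status'), m.group('sct'), m.group('sc'))] += 1

    for op, n in cmds.most_common():
        print(f'{op:8s} commands:    {n}')
    for (status, sct, sc), n in cpls.most_common():
        print(f'{status} (sct={sct} sc={sc}): {n}')

In the excerpt above, lba steps by 8 with len:8 per command and a 0x1000-byte payload per WRITE (consistent with sequential 4 KiB I/O at a 512-byte block size), and sqhd increments by one per completion, so a tally like this mainly confirms that every outstanding command on qid:1 came back with the same path-related status rather than a data error.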
00:33:44.790 [2024-07-25 14:01:26.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.500974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.500991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.501000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.501018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.501027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.501044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.501053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.501070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.790 [2024-07-25 14:01:26.501079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:44.790 [2024-07-25 14:01:26.501098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:44.791 
[2024-07-25 14:01:26.501875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.501976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.501996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.791 [2024-07-25 14:01:26.502311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:44.791 [2024-07-25 14:01:26.502332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:26.502342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:26.502363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:26.502372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.229947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.792 [2024-07-25 14:01:39.229989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.792 [2024-07-25 14:01:39.230022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.792 [2024-07-25 14:01:39.230047] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.792 [2024-07-25 14:01:39.230120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 
[2024-07-25 14:01:39.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.230920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.230929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.792 [2024-07-25 14:01:39.231762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:44.792 [2024-07-25 14:01:39.231923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.792 [2024-07-25 14:01:39.231932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.231947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.793 [2024-07-25 14:01:39.231956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.231971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.793 [2024-07-25 14:01:39.231980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.231994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:33:44.793 [2024-07-25 14:01:39.232065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.232448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.232457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.793 [2024-07-25 14:01:39.233520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:44.793 [2024-07-25 14:01:39.233535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.233591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.233642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.233874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.794 [2024-07-25 14:01:39.233884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 14:01:39.234480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.794 [2024-07-25 14:01:39.234490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.794 [2024-07-25 
14:01:39.234504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.794 [2024-07-25 14:01:39.234513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the command/completion NOTICE pair above repeats for every outstanding READ and WRITE on qid:1 (len:8, lba 128304-129528, cid 0-126), including commands resubmitted after retry, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, from 14:01:39.234504 through 14:01:39.241164 ...]
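(Editorial aside: this flood is SPDK's qpair error path. When a completion arrives with a non-success status, nvme_qpair.c prints the failed command (`nvme_io_qpair_print_command`) and its completion (`spdk_nvme_print_completion`) once per outstanding I/O, so a single path/ANA transition in this failover test produces hundreds of pairs; status 03/02 is the path-related "asymmetric access inaccessible" code, and dnr:0 means the command may be retried. When triaging a run like this it is usually enough to count the pairs by opcode and status rather than read them all. The sketch below is a minimal, hypothetical helper for doing that; it is not part of the test suite, and its regexes assume only the field layout visible in the excerpt above.)

    #!/usr/bin/env python3
    """Tally SPDK nvme_qpair NOTICE lines by opcode and completion status.

    A minimal sketch for triaging logs like this one; the patterns assume
    the "READ/WRITE sqid: cid: nsid: lba: len:" command form and the
    "STATUS (sct/sc) qid: cid:" completion form seen in this run.
    """
    import re
    import sys
    from collections import Counter

    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
    CPL_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) "
        r"\((\w+)/(\w+)\) qid:(\d+) cid:(\d+)")

    def tally(stream):
        cmds, cpls = Counter(), Counter()
        for line in stream:
            m = CMD_RE.search(line)
            if m:
                cmds[m.group(1)] += 1  # count READ vs WRITE submissions
                continue
            m = CPL_RE.search(line)
            if m:
                # key on "STATUS (sct/sc)", e.g.
                # "ASYMMETRIC ACCESS INACCESSIBLE (03/02)"
                cpls[f"{m.group(1)} ({m.group(2)}/{m.group(3)})"] += 1
        return cmds, cpls

    if __name__ == "__main__":
        cmds, cpls = tally(sys.stdin)
        for opcode, n in sorted(cmds.items()):
            print(f"{opcode:>6}: {n}")
        for status, n in cpls.most_common():
            print(f"{n:6d}  {status}")

Run as e.g. `python3 tally_nvme_notices.py < console.log` (both names hypothetical); a healthy failover run shows these counts drop to zero once the optimized path is restored. End of aside; the log continues below.)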
[... the same NOTICE stream continues from 14:01:39.241173 through 14:01:39.254563: remaining and newly resubmitted READs and WRITEs on qid:1 (len:8, lba up to 129984) keep completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:33:44.800 [2024-07-25 14:01:39.254563] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.254572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.254587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.254606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.254621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.254630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.254644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.254654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.254668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.254677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.255302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.255328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.255352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.255376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.255400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 
m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.255423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.255447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.255470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.255494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.255521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.255535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.255544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.256490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.256516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.256540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.800 [2024-07-25 14:01:39.256564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.256588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.256612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.256636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.256659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.800 [2024-07-25 14:01:39.256683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:44.800 [2024-07-25 14:01:39.256697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.256762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.256785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 
14:01:39.256809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.256988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.256997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.257425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.257449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.257473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.257591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.257638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.257688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.257703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.257712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.258847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.258897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.801 [2024-07-25 14:01:39.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.258944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.258968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.258982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.258991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c 
p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.259005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.259014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.259029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.259038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.259052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.259061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.801 [2024-07-25 14:01:39.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.801 [2024-07-25 14:01:39.259087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 
14:01:39.259469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.259540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.259625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.259634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.260524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.802 [2024-07-25 14:01:39.260641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.260665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.260688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.260705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.260719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.261179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.261194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:44.802 [2024-07-25 14:01:39.261210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.802 [2024-07-25 14:01:39.261220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:33:44.803 [2024-07-25 14:01:39.261376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.261530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.261639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.261648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.262564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.262589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.262613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.262640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.803 [2024-07-25 14:01:39.262663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.262687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 14:01:39.262711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.803 [2024-07-25 14:01:39.262731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.803 [2024-07-25 
14:01:39.262740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.803 [... 2024-07-25 14:01:39.262754 through 14:01:39.274630: repetitive nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs, each reporting a queued READ or WRITE (sqid:1 nsid:1 len:8) completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the individual per-command entries are elided here ...] 00:33:44.809 [2024-07-25 14:01:39.274640] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.274663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.274687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.274710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.274741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.274764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.274788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.274802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.274812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.809 [2024-07-25 14:01:39.276676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.809 [2024-07-25 14:01:39.276729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:44.809 [2024-07-25 14:01:39.276743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.276824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.276847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.276871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.276894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.276982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.276991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.277006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.277015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.277029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.277038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:33:44.810 [2024-07-25 14:01:39.277053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.277062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.278918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.278937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.278954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.278979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.278994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.810 [2024-07-25 14:01:39.279446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.810 [2024-07-25 14:01:39.279518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.810 [2024-07-25 14:01:39.279532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.279542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.279556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.279565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.279580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.279590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:44.811 [2024-07-25 14:01:39.280285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.811 [2024-07-25 14:01:39.280594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.280758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.280768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.281239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.281254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.281271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.281281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.281295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.281305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.281319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.811 [2024-07-25 14:01:39.281328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.811 [2024-07-25 14:01:39.281343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:44.812 
[2024-07-25 14:01:39.281466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.281665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 
p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.281734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.281743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.282342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.282463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.812 [2024-07-25 14:01:39.282510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:44.812 [2024-07-25 14:01:39.282525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.812 [2024-07-25 14:01:39.282534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.813 [2024-07-25 14:01:39.282548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.813 [2024-07-25 14:01:39.282557] nvme_qpair.c: 
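The (03/02) pair in these records is the status printed by spdk_nvme_print_completion: Status Code Type 0x3 (path-related) with Status Code 0x02, meaning the namespace is only reachable through a path whose ANA state is INACCESSIBLE, so every queued I/O is failed back while the test holds the optimized path down. A minimal sketch for tallying such a burst offline (the log file name is an assumption, not something the harness produces):

```bash
# Count failed I/Os per opcode on queue 1, then the total number of
# completions failed back with the path-related status SCT=0x3/SC=0x02;
# "multipath.log" is a hypothetical capture of this console output.
grep -oE '(READ|WRITE) sqid:1' multipath.log | sort | uniq -c
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' multipath.log
```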
00:33:44.813 Received shutdown signal, test time was about 26.871631 seconds
00:33:44.813
00:33:44.813                                                           Latency(us)
00:33:44.813 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:44.813 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:44.813 	 Verification LBA range: start 0x0 length 0x4000
00:33:44.813 	 Nvme0n1             :      26.87   11380.90      44.46       0.00       0.00   11227.97     766.77 3019898.88
00:33:44.813 ===================================================================================================================
00:33:44.813 Total                  :              11380.90      44.46       0.00       0.00   11227.97     766.77 3019898.88
00:33:44.813 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
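The bdevperf summary above is internally consistent: at the job's fixed 4 KiB I/O size the MiB/s column follows from the IOPS column, and with queue depth 128 the average latency implies roughly the same IOPS by Little's law. A quick check (a sketch, not part of the harness):

```bash
# Verify the reported columns from each other.
awk 'BEGIN {
    printf "MiB/s from IOPS      : %.2f\n", 11380.90 * 4096 / 1048576  # table: 44.46
    printf "IOPS from avg latency: %.0f\n", 128 / (11227.97 / 1e6)     # ~11400 vs 11380.90
}'
```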
nvmftestfini 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:45.072 rmmod nvme_tcp 00:33:45.072 rmmod nvme_fabrics 00:33:45.072 rmmod nvme_keyring 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 465665 ']' 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 465665 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 465665 ']' 00:33:45.072 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 465665 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465665 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465665' 00:33:45.073 killing process with pid 465665 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 465665 00:33:45.073 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 465665 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.333 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.333 14:01:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.869 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:47.869 00:33:47.869 real 0m39.331s 00:33:47.869 user 1m40.051s 00:33:47.869 sys 0m14.198s 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:47.870 ************************************ 00:33:47.870 END TEST nvmf_host_multipath_status 00:33:47.870 ************************************ 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.870 ************************************ 00:33:47.870 START TEST nvmf_discovery_remove_ifc 00:33:47.870 ************************************ 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:47.870 * Looking for test storage... 00:33:47.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.870 14:01:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:47.870 14:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:54.451 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:54.451 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.451 14:01:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:54.451 Found net devices under 0000:af:00.0: cvl_0_0 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:54.451 Found net devices under 0000:af:00.1: cvl_0_1 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.451 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:54.452 
14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:33:54.452 00:33:54.452 --- 10.0.0.2 ping statistics --- 00:33:54.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.452 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:33:54.452 00:33:54.452 --- 10.0.0.1 ping statistics --- 00:33:54.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.452 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=474437 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 474437 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 474437 ']' 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.452 14:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 [2024-07-25 14:01:50.622439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:33:54.452 [2024-07-25 14:01:50.622494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.452 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.452 [2024-07-25 14:01:50.662578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:54.452 [2024-07-25 14:01:50.697293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.452 [2024-07-25 14:01:50.735696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.452 [2024-07-25 14:01:50.735741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.452 [2024-07-25 14:01:50.735751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.452 [2024-07-25 14:01:50.735760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.452 [2024-07-25 14:01:50.735767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.452 [2024-07-25 14:01:50.735787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.711 [2024-07-25 14:01:51.477882] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.711 [2024-07-25 14:01:51.486021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:54.711 null0 00:33:54.711 [2024-07-25 14:01:51.518124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=474584 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 474584 /tmp/host.sock 00:33:54.711 14:01:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 474584 ']' 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:54.711 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.711 14:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.711 [2024-07-25 14:01:51.589234] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:54.711 [2024-07-25 14:01:51.589285] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474584 ] 00:33:54.971 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.971 [2024-07-25 14:01:51.625530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:54.971 [2024-07-25 14:01:51.659954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.971 [2024-07-25 14:01:51.699092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.539 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.798 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.798 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 
00:33:55.798 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.798 14:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.745 [2024-07-25 14:01:53.476180] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:56.745 [2024-07-25 14:01:53.476201] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:56.745 [2024-07-25 14:01:53.476214] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:56.745 [2024-07-25 14:01:53.603603] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:57.014 [2024-07-25 14:01:53.667701] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:57.014 [2024-07-25 14:01:53.667750] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:57.014 [2024-07-25 14:01:53.667771] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:57.014 [2024-07-25 14:01:53.667785] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:57.014 [2024-07-25 14:01:53.667803] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.014 [2024-07-25 14:01:53.716284] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e0c8f0 was disconnected and freed. delete nvme_qpair. 
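The wait loop traced above (host/discovery_remove_ifc.sh@33/@34) polls the host app's RPC socket once per second until the flattened bdev list matches an expected string ("nvme0n1" here, later "" and "nvme1n1"). Reconstructed from the xtrace alone, the helper pair is likely close to the sketch below; the function bodies are inferred rather than copied from the script source, and rpc_cmd is the harness wrapper around scripts/rpc.py:

    # Sketch inferred from the xtrace; not verbatim from discovery_remove_ifc.sh.
    get_bdev_list() {
        # Flatten all bdev names into one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list equals the expected value, e.g. "nvme0n1" or "".
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }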
00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:57.014 14:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:58.445 14:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.383 14:01:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:59.383 14:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.383 14:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:59.383 14:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:00.321 14:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:01.258 14:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:02.635 [2024-07-25 14:01:59.108809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:02.636 [2024-07-25 14:01:59.108851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.636 [2024-07-25 14:01:59.108881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.636 [2024-07-25 14:01:59.108903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.636 [2024-07-25 14:01:59.108913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.636 [2024-07-25 14:01:59.108923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.636 [2024-07-25 14:01:59.108932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.636 [2024-07-25 14:01:59.108941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.636 [2024-07-25 14:01:59.108954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.636 [2024-07-25 14:01:59.108964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.636 [2024-07-25 14:01:59.108973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.636 [2024-07-25 14:01:59.108982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd32f0 is same with the state(5) to be set 00:34:02.636 [2024-07-25 14:01:59.118832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd32f0 (9): Bad file descriptor 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:02.636 [2024-07-25 14:01:59.128869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:02.636 14:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:03.574 [2024-07-25 14:02:00.151739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:03.574 [2024-07-25 14:02:00.151793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd32f0 with addr=10.0.0.2, port=4420 00:34:03.574 [2024-07-25 14:02:00.151818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd32f0 is same with the state(5) to be set 00:34:03.574 [2024-07-25 14:02:00.151853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd32f0 (9): Bad file descriptor 00:34:03.574 [2024-07-25 14:02:00.152268] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform 
failover, already in progress. 00:34:03.574 [2024-07-25 14:02:00.152301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:03.574 [2024-07-25 14:02:00.152314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:03.574 [2024-07-25 14:02:00.152327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:03.574 [2024-07-25 14:02:00.152349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.574 [2024-07-25 14:02:00.152362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:03.574 14:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.574 14:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:03.574 14:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:04.513 [2024-07-25 14:02:01.154830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:04.513 [2024-07-25 14:02:01.154852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:04.513 [2024-07-25 14:02:01.154862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:04.513 [2024-07-25 14:02:01.154871] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:04.513 [2024-07-25 14:02:01.154901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
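The errno 110 (Connection timed out) failures above are induced deliberately: at @75/@76 earlier in the trace the test deletes the target-side address and downs the interface inside the cvl_0_0_ns_spdk namespace, so every reconnect attempt from the host times out. Given the discovery options passed at @69 (--reconnect-delay-sec 1, --ctrlr-loss-timeout-sec 2, --fast-io-fail-timeout-sec 1), bdev_nvme retries roughly once per second until the loss timeout expires and the controller is failed for good. The fault injection, copied from the xtrace, is just:

    # Pull the target's data path out from under the connected host.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down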
00:34:04.513 [2024-07-25 14:02:01.154921] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:04.513 [2024-07-25 14:02:01.154949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.513 [2024-07-25 14:02:01.154961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.513 [2024-07-25 14:02:01.154973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.513 [2024-07-25 14:02:01.154982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.513 [2024-07-25 14:02:01.154992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.513 [2024-07-25 14:02:01.155001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.513 [2024-07-25 14:02:01.155011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.513 [2024-07-25 14:02:01.155020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.513 [2024-07-25 14:02:01.155030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.513 [2024-07-25 14:02:01.155040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.513 [2024-07-25 14:02:01.155049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
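At this point the data controller has exhausted its loss timeout, so bdev_nvme deletes it and the discovery poller drops the nqn.2016-06.io.spdk:cnode0 entry; the ABORTED - SQ DELETION completions above are the queued ASYNC EVENT REQUEST and KEEP ALIVE admin commands being flushed during teardown. Shortly after, get_bdev_list returns an empty string and wait_for_bdev '' completes. A manual way to watch the same transition (not part of this script, shown only as an illustrative check) would be to list the host app's remaining controllers over the same socket:

    # Illustrative manual check; output is empty once nvme0 has been deleted.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /tmp/host.sock bdev_nvme_get_controllers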
00:34:04.513 [2024-07-25 14:02:01.155096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd2790 (9): Bad file descriptor 00:34:04.513 [2024-07-25 14:02:01.156119] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:04.513 [2024-07-25 14:02:01.156130] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.513 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:04.514 14:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.892 14:02:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:05.892 14:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:06.460 [2024-07-25 14:02:03.211487] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:06.460 [2024-07-25 14:02:03.211506] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:06.460 [2024-07-25 14:02:03.211518] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:06.460 [2024-07-25 14:02:03.340911] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:06.719 [2024-07-25 14:02:03.402232] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:06.719 [2024-07-25 14:02:03.402266] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:06.719 [2024-07-25 14:02:03.402283] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:06.719 [2024-07-25 14:02:03.402297] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:06.719 [2024-07-25 14:02:03.402305] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:06.719 [2024-07-25 14:02:03.410372] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e18a20 was disconnected and freed. delete nvme_qpair. 
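(Annotation: the stretch of trace above re-adds the target's address inside its network namespace, brings the link back up, and then polls get_bdev_list once per second until nvme1n1 reappears, which the discovery_attach_cb/discovery_poller entries confirm. A consolidated sketch of that wait loop, assuming the get_bdev_list helper sketched earlier; the real logic lives in host/discovery_remove_ifc.sh and may differ in detail:)

    # Sketch of the polling pattern in the trace: compare the bdev list to the
    # expected name once per second until the discovery service re-attaches it.
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }

    # Restore the target interface inside its namespace, then wait for the
    # host's discovery poller to recreate the nvme1n1 bdev.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1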
00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 474584 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 474584 ']' 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 474584 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474584 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474584' 00:34:06.719 killing process with pid 474584 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 474584 00:34:06.719 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 474584 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:06.978 rmmod nvme_tcp 00:34:06.978 rmmod nvme_fabrics 00:34:06.978 rmmod nvme_keyring 00:34:06.978 14:02:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 474437 ']' 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 474437 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 474437 ']' 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 474437 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474437 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474437' 00:34:06.978 killing process with pid 474437 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 474437 00:34:06.978 14:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 474437 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.238 14:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.775 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:09.776 00:34:09.776 real 0m21.851s 00:34:09.776 user 0m25.978s 00:34:09.776 sys 0m6.701s 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.776 ************************************ 00:34:09.776 END TEST nvmf_discovery_remove_ifc 00:34:09.776 ************************************ 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.776 ************************************ 00:34:09.776 START TEST nvmf_identify_kernel_target 00:34:09.776 ************************************ 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:09.776 * Looking for test storage... 00:34:09.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.776 14:02:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:34:09.776 14:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:34:16.350 
14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:16.350 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:16.350 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.350 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:16.351 Found net devices under 0000:af:00.0: cvl_0_0 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:16.351 Found net devices under 0000:af:00.1: cvl_0_1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:16.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:16.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:34:16.351 00:34:16.351 --- 10.0.0.2 ping statistics --- 00:34:16.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.351 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:34:16.351 00:34:16.351 --- 10.0.0.1 ping statistics --- 00:34:16.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.351 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:16.351 14:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:19.708 Waiting for block devices as requested 00:34:19.708 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.708 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.967 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:19.967 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:19.967 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:19.967 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:20.226 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:20.226 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:20.226 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:20.484 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:20.484 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
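(Annotation: the mkdir/echo trace that follows builds a kernel NVMe-oF target through configfs and exports /dev/nvme0n1 on 10.0.0.1:4420. xtrace does not show the redirection targets of the bare echo lines, so the configfs file names in this consolidated sketch are filled in from the standard nvmet layout; treat them as an assumption, not a transcript of the script.)

    # Sketch of the kernel nvmet setup, with assumed configfs destinations.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string; assumed file
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed file
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

(After these steps, the nvme discover call in the trace should report two records, the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, exactly as the Discovery Log entries below show.)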
00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:20.743 No valid GPT data, bailing 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:20.743 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:34:20.743 00:34:20.743 Discovery Log Number of Records 2, Generation counter 2 00:34:20.743 =====Discovery Log Entry 0====== 00:34:20.743 trtype: tcp 00:34:20.743 adrfam: ipv4 00:34:20.743 subtype: current discovery subsystem 00:34:20.743 treq: not specified, sq flow control disable supported 00:34:20.743 portid: 1 00:34:20.743 trsvcid: 4420 00:34:20.743 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:20.743 traddr: 10.0.0.1 00:34:20.743 eflags: none 00:34:20.743 sectype: none 00:34:20.743 =====Discovery Log Entry 1====== 00:34:20.743 trtype: tcp 00:34:20.743 adrfam: ipv4 00:34:20.743 subtype: nvme subsystem 00:34:20.743 treq: not specified, sq flow control disable supported 00:34:20.743 portid: 1 00:34:20.743 trsvcid: 4420 00:34:20.743 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:20.743 traddr: 10.0.0.1 00:34:20.743 eflags: none 00:34:20.743 sectype: none 00:34:20.743 14:02:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:20.743 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:20.743 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.003 ===================================================== 00:34:21.003 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:21.003 ===================================================== 00:34:21.003 Controller Capabilities/Features 00:34:21.003 ================================ 00:34:21.003 Vendor ID: 0000 00:34:21.003 Subsystem Vendor ID: 0000 00:34:21.003 Serial Number: f23f4c931c09139af285 00:34:21.003 Model Number: Linux 00:34:21.003 Firmware Version: 6.7.0-68 00:34:21.003 Recommended Arb Burst: 0 00:34:21.003 IEEE OUI Identifier: 00 00 00 00:34:21.003 Multi-path I/O 00:34:21.003 May have multiple subsystem ports: No 00:34:21.004 May have multiple controllers: No 00:34:21.004 Associated with SR-IOV VF: No 00:34:21.004 Max Data Transfer Size: Unlimited 00:34:21.004 Max Number of Namespaces: 0 00:34:21.004 Max Number of I/O Queues: 1024 00:34:21.004 NVMe Specification Version (VS): 1.3 00:34:21.004 NVMe Specification Version (Identify): 1.3 00:34:21.004 Maximum Queue Entries: 1024 00:34:21.004 Contiguous Queues Required: No 00:34:21.004 Arbitration Mechanisms Supported 00:34:21.004 Weighted Round Robin: Not Supported 00:34:21.004 Vendor Specific: Not Supported 00:34:21.004 Reset Timeout: 7500 ms 00:34:21.004 Doorbell Stride: 4 bytes 00:34:21.004 NVM Subsystem Reset: Not Supported 00:34:21.004 Command Sets Supported 00:34:21.004 NVM Command Set: Supported 00:34:21.004 Boot Partition: Not Supported 00:34:21.004 Memory Page Size Minimum: 4096 bytes 00:34:21.004 Memory Page Size Maximum: 4096 bytes 00:34:21.004 Persistent Memory Region: Not Supported 00:34:21.004 Optional Asynchronous Events Supported 00:34:21.004 Namespace Attribute Notices: Not Supported 00:34:21.004 Firmware Activation Notices: Not Supported 00:34:21.004 ANA Change Notices: Not Supported 00:34:21.004 PLE Aggregate Log Change Notices: Not Supported 00:34:21.004 LBA Status Info Alert Notices: Not Supported 00:34:21.004 EGE Aggregate Log Change Notices: Not Supported 00:34:21.004 Normal NVM Subsystem Shutdown event: Not Supported 00:34:21.004 Zone Descriptor Change Notices: Not Supported 00:34:21.004 Discovery Log Change Notices: Supported 00:34:21.004 Controller Attributes 00:34:21.004 128-bit Host Identifier: Not Supported 00:34:21.004 Non-Operational Permissive Mode: Not Supported 00:34:21.004 NVM Sets: Not Supported 00:34:21.004 Read Recovery Levels: Not Supported 00:34:21.004 Endurance Groups: Not Supported 00:34:21.004 Predictable Latency Mode: Not Supported 00:34:21.004 Traffic Based Keep ALive: Not Supported 00:34:21.004 Namespace Granularity: Not Supported 00:34:21.004 SQ Associations: Not Supported 00:34:21.004 UUID List: Not Supported 00:34:21.004 Multi-Domain Subsystem: Not Supported 00:34:21.004 Fixed Capacity Management: Not Supported 00:34:21.004 Variable Capacity Management: Not Supported 00:34:21.004 Delete Endurance Group: Not Supported 00:34:21.004 Delete NVM Set: Not Supported 00:34:21.004 Extended LBA Formats Supported: Not Supported 00:34:21.004 Flexible Data Placement Supported: Not Supported 00:34:21.004 00:34:21.004 Controller Memory Buffer Support 00:34:21.004 ================================ 00:34:21.004 Supported: No 
00:34:21.004 00:34:21.004 Persistent Memory Region Support 00:34:21.004 ================================ 00:34:21.004 Supported: No 00:34:21.004 00:34:21.004 Admin Command Set Attributes 00:34:21.004 ============================ 00:34:21.004 Security Send/Receive: Not Supported 00:34:21.004 Format NVM: Not Supported 00:34:21.004 Firmware Activate/Download: Not Supported 00:34:21.004 Namespace Management: Not Supported 00:34:21.004 Device Self-Test: Not Supported 00:34:21.004 Directives: Not Supported 00:34:21.004 NVMe-MI: Not Supported 00:34:21.004 Virtualization Management: Not Supported 00:34:21.004 Doorbell Buffer Config: Not Supported 00:34:21.004 Get LBA Status Capability: Not Supported 00:34:21.004 Command & Feature Lockdown Capability: Not Supported 00:34:21.004 Abort Command Limit: 1 00:34:21.004 Async Event Request Limit: 1 00:34:21.004 Number of Firmware Slots: N/A 00:34:21.004 Firmware Slot 1 Read-Only: N/A 00:34:21.004 Firmware Activation Without Reset: N/A 00:34:21.004 Multiple Update Detection Support: N/A 00:34:21.004 Firmware Update Granularity: No Information Provided 00:34:21.004 Per-Namespace SMART Log: No 00:34:21.004 Asymmetric Namespace Access Log Page: Not Supported 00:34:21.004 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:21.004 Command Effects Log Page: Not Supported 00:34:21.004 Get Log Page Extended Data: Supported 00:34:21.004 Telemetry Log Pages: Not Supported 00:34:21.004 Persistent Event Log Pages: Not Supported 00:34:21.004 Supported Log Pages Log Page: May Support 00:34:21.004 Commands Supported & Effects Log Page: Not Supported 00:34:21.004 Feature Identifiers & Effects Log Page:May Support 00:34:21.004 NVMe-MI Commands & Effects Log Page: May Support 00:34:21.004 Data Area 4 for Telemetry Log: Not Supported 00:34:21.004 Error Log Page Entries Supported: 1 00:34:21.004 Keep Alive: Not Supported 00:34:21.004 00:34:21.004 NVM Command Set Attributes 00:34:21.004 ========================== 00:34:21.004 Submission Queue Entry Size 00:34:21.004 Max: 1 00:34:21.004 Min: 1 00:34:21.004 Completion Queue Entry Size 00:34:21.004 Max: 1 00:34:21.004 Min: 1 00:34:21.004 Number of Namespaces: 0 00:34:21.004 Compare Command: Not Supported 00:34:21.004 Write Uncorrectable Command: Not Supported 00:34:21.004 Dataset Management Command: Not Supported 00:34:21.004 Write Zeroes Command: Not Supported 00:34:21.004 Set Features Save Field: Not Supported 00:34:21.004 Reservations: Not Supported 00:34:21.004 Timestamp: Not Supported 00:34:21.004 Copy: Not Supported 00:34:21.004 Volatile Write Cache: Not Present 00:34:21.004 Atomic Write Unit (Normal): 1 00:34:21.004 Atomic Write Unit (PFail): 1 00:34:21.004 Atomic Compare & Write Unit: 1 00:34:21.004 Fused Compare & Write: Not Supported 00:34:21.004 Scatter-Gather List 00:34:21.004 SGL Command Set: Supported 00:34:21.004 SGL Keyed: Not Supported 00:34:21.004 SGL Bit Bucket Descriptor: Not Supported 00:34:21.004 SGL Metadata Pointer: Not Supported 00:34:21.004 Oversized SGL: Not Supported 00:34:21.004 SGL Metadata Address: Not Supported 00:34:21.004 SGL Offset: Supported 00:34:21.004 Transport SGL Data Block: Not Supported 00:34:21.004 Replay Protected Memory Block: Not Supported 00:34:21.004 00:34:21.004 Firmware Slot Information 00:34:21.004 ========================= 00:34:21.004 Active slot: 0 00:34:21.004 00:34:21.004 00:34:21.004 Error Log 00:34:21.004 ========= 00:34:21.004 00:34:21.004 Active Namespaces 00:34:21.004 ================= 00:34:21.004 Discovery Log Page 00:34:21.004 ================== 00:34:21.004 
Generation Counter: 2 00:34:21.004 Number of Records: 2 00:34:21.004 Record Format: 0 00:34:21.004 00:34:21.004 Discovery Log Entry 0 00:34:21.004 ---------------------- 00:34:21.004 Transport Type: 3 (TCP) 00:34:21.004 Address Family: 1 (IPv4) 00:34:21.004 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:21.004 Entry Flags: 00:34:21.004 Duplicate Returned Information: 0 00:34:21.004 Explicit Persistent Connection Support for Discovery: 0 00:34:21.004 Transport Requirements: 00:34:21.004 Secure Channel: Not Specified 00:34:21.004 Port ID: 1 (0x0001) 00:34:21.004 Controller ID: 65535 (0xffff) 00:34:21.004 Admin Max SQ Size: 32 00:34:21.004 Transport Service Identifier: 4420 00:34:21.004 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:21.004 Transport Address: 10.0.0.1 00:34:21.004 Discovery Log Entry 1 00:34:21.004 ---------------------- 00:34:21.004 Transport Type: 3 (TCP) 00:34:21.004 Address Family: 1 (IPv4) 00:34:21.004 Subsystem Type: 2 (NVM Subsystem) 00:34:21.004 Entry Flags: 00:34:21.004 Duplicate Returned Information: 0 00:34:21.004 Explicit Persistent Connection Support for Discovery: 0 00:34:21.004 Transport Requirements: 00:34:21.004 Secure Channel: Not Specified 00:34:21.004 Port ID: 1 (0x0001) 00:34:21.004 Controller ID: 65535 (0xffff) 00:34:21.004 Admin Max SQ Size: 32 00:34:21.004 Transport Service Identifier: 4420 00:34:21.004 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:21.004 Transport Address: 10.0.0.1 00:34:21.004 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:21.004 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.004 get_feature(0x01) failed 00:34:21.004 get_feature(0x02) failed 00:34:21.004 get_feature(0x04) failed 00:34:21.004 ===================================================== 00:34:21.004 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:21.004 ===================================================== 00:34:21.004 Controller Capabilities/Features 00:34:21.004 ================================ 00:34:21.004 Vendor ID: 0000 00:34:21.005 Subsystem Vendor ID: 0000 00:34:21.005 Serial Number: 72fdb37a043c9279a155 00:34:21.005 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:21.005 Firmware Version: 6.7.0-68 00:34:21.005 Recommended Arb Burst: 6 00:34:21.005 IEEE OUI Identifier: 00 00 00 00:34:21.005 Multi-path I/O 00:34:21.005 May have multiple subsystem ports: Yes 00:34:21.005 May have multiple controllers: Yes 00:34:21.005 Associated with SR-IOV VF: No 00:34:21.005 Max Data Transfer Size: Unlimited 00:34:21.005 Max Number of Namespaces: 1024 00:34:21.005 Max Number of I/O Queues: 128 00:34:21.005 NVMe Specification Version (VS): 1.3 00:34:21.005 NVMe Specification Version (Identify): 1.3 00:34:21.005 Maximum Queue Entries: 1024 00:34:21.005 Contiguous Queues Required: No 00:34:21.005 Arbitration Mechanisms Supported 00:34:21.005 Weighted Round Robin: Not Supported 00:34:21.005 Vendor Specific: Not Supported 00:34:21.005 Reset Timeout: 7500 ms 00:34:21.005 Doorbell Stride: 4 bytes 00:34:21.005 NVM Subsystem Reset: Not Supported 00:34:21.005 Command Sets Supported 00:34:21.005 NVM Command Set: Supported 00:34:21.005 Boot Partition: Not Supported 00:34:21.005 Memory Page Size Minimum: 4096 bytes 00:34:21.005 Memory Page Size Maximum: 4096 bytes 00:34:21.005 
Persistent Memory Region: Not Supported 00:34:21.005 Optional Asynchronous Events Supported 00:34:21.005 Namespace Attribute Notices: Supported 00:34:21.005 Firmware Activation Notices: Not Supported 00:34:21.005 ANA Change Notices: Supported 00:34:21.005 PLE Aggregate Log Change Notices: Not Supported 00:34:21.005 LBA Status Info Alert Notices: Not Supported 00:34:21.005 EGE Aggregate Log Change Notices: Not Supported 00:34:21.005 Normal NVM Subsystem Shutdown event: Not Supported 00:34:21.005 Zone Descriptor Change Notices: Not Supported 00:34:21.005 Discovery Log Change Notices: Not Supported 00:34:21.005 Controller Attributes 00:34:21.005 128-bit Host Identifier: Supported 00:34:21.005 Non-Operational Permissive Mode: Not Supported 00:34:21.005 NVM Sets: Not Supported 00:34:21.005 Read Recovery Levels: Not Supported 00:34:21.005 Endurance Groups: Not Supported 00:34:21.005 Predictable Latency Mode: Not Supported 00:34:21.005 Traffic Based Keep ALive: Supported 00:34:21.005 Namespace Granularity: Not Supported 00:34:21.005 SQ Associations: Not Supported 00:34:21.005 UUID List: Not Supported 00:34:21.005 Multi-Domain Subsystem: Not Supported 00:34:21.005 Fixed Capacity Management: Not Supported 00:34:21.005 Variable Capacity Management: Not Supported 00:34:21.005 Delete Endurance Group: Not Supported 00:34:21.005 Delete NVM Set: Not Supported 00:34:21.005 Extended LBA Formats Supported: Not Supported 00:34:21.005 Flexible Data Placement Supported: Not Supported 00:34:21.005 00:34:21.005 Controller Memory Buffer Support 00:34:21.005 ================================ 00:34:21.005 Supported: No 00:34:21.005 00:34:21.005 Persistent Memory Region Support 00:34:21.005 ================================ 00:34:21.005 Supported: No 00:34:21.005 00:34:21.005 Admin Command Set Attributes 00:34:21.005 ============================ 00:34:21.005 Security Send/Receive: Not Supported 00:34:21.005 Format NVM: Not Supported 00:34:21.005 Firmware Activate/Download: Not Supported 00:34:21.005 Namespace Management: Not Supported 00:34:21.005 Device Self-Test: Not Supported 00:34:21.005 Directives: Not Supported 00:34:21.005 NVMe-MI: Not Supported 00:34:21.005 Virtualization Management: Not Supported 00:34:21.005 Doorbell Buffer Config: Not Supported 00:34:21.005 Get LBA Status Capability: Not Supported 00:34:21.005 Command & Feature Lockdown Capability: Not Supported 00:34:21.005 Abort Command Limit: 4 00:34:21.005 Async Event Request Limit: 4 00:34:21.005 Number of Firmware Slots: N/A 00:34:21.005 Firmware Slot 1 Read-Only: N/A 00:34:21.005 Firmware Activation Without Reset: N/A 00:34:21.005 Multiple Update Detection Support: N/A 00:34:21.005 Firmware Update Granularity: No Information Provided 00:34:21.005 Per-Namespace SMART Log: Yes 00:34:21.005 Asymmetric Namespace Access Log Page: Supported 00:34:21.005 ANA Transition Time : 10 sec 00:34:21.005 00:34:21.005 Asymmetric Namespace Access Capabilities 00:34:21.005 ANA Optimized State : Supported 00:34:21.005 ANA Non-Optimized State : Supported 00:34:21.005 ANA Inaccessible State : Supported 00:34:21.005 ANA Persistent Loss State : Supported 00:34:21.005 ANA Change State : Supported 00:34:21.005 ANAGRPID is not changed : No 00:34:21.005 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:21.005 00:34:21.005 ANA Group Identifier Maximum : 128 00:34:21.005 Number of ANA Group Identifiers : 128 00:34:21.005 Max Number of Allowed Namespaces : 1024 00:34:21.005 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:21.005 Command Effects Log Page: Supported 
00:34:21.005 Get Log Page Extended Data: Supported 00:34:21.005 Telemetry Log Pages: Not Supported 00:34:21.005 Persistent Event Log Pages: Not Supported 00:34:21.005 Supported Log Pages Log Page: May Support 00:34:21.005 Commands Supported & Effects Log Page: Not Supported 00:34:21.005 Feature Identifiers & Effects Log Page:May Support 00:34:21.005 NVMe-MI Commands & Effects Log Page: May Support 00:34:21.005 Data Area 4 for Telemetry Log: Not Supported 00:34:21.005 Error Log Page Entries Supported: 128 00:34:21.005 Keep Alive: Supported 00:34:21.005 Keep Alive Granularity: 1000 ms 00:34:21.005 00:34:21.005 NVM Command Set Attributes 00:34:21.005 ========================== 00:34:21.005 Submission Queue Entry Size 00:34:21.005 Max: 64 00:34:21.005 Min: 64 00:34:21.005 Completion Queue Entry Size 00:34:21.005 Max: 16 00:34:21.005 Min: 16 00:34:21.005 Number of Namespaces: 1024 00:34:21.005 Compare Command: Not Supported 00:34:21.005 Write Uncorrectable Command: Not Supported 00:34:21.005 Dataset Management Command: Supported 00:34:21.005 Write Zeroes Command: Supported 00:34:21.005 Set Features Save Field: Not Supported 00:34:21.005 Reservations: Not Supported 00:34:21.005 Timestamp: Not Supported 00:34:21.005 Copy: Not Supported 00:34:21.005 Volatile Write Cache: Present 00:34:21.005 Atomic Write Unit (Normal): 1 00:34:21.005 Atomic Write Unit (PFail): 1 00:34:21.005 Atomic Compare & Write Unit: 1 00:34:21.005 Fused Compare & Write: Not Supported 00:34:21.005 Scatter-Gather List 00:34:21.005 SGL Command Set: Supported 00:34:21.005 SGL Keyed: Not Supported 00:34:21.005 SGL Bit Bucket Descriptor: Not Supported 00:34:21.005 SGL Metadata Pointer: Not Supported 00:34:21.005 Oversized SGL: Not Supported 00:34:21.005 SGL Metadata Address: Not Supported 00:34:21.005 SGL Offset: Supported 00:34:21.005 Transport SGL Data Block: Not Supported 00:34:21.005 Replay Protected Memory Block: Not Supported 00:34:21.005 00:34:21.005 Firmware Slot Information 00:34:21.005 ========================= 00:34:21.005 Active slot: 0 00:34:21.005 00:34:21.005 Asymmetric Namespace Access 00:34:21.005 =========================== 00:34:21.005 Change Count : 0 00:34:21.005 Number of ANA Group Descriptors : 1 00:34:21.005 ANA Group Descriptor : 0 00:34:21.005 ANA Group ID : 1 00:34:21.005 Number of NSID Values : 1 00:34:21.005 Change Count : 0 00:34:21.005 ANA State : 1 00:34:21.005 Namespace Identifier : 1 00:34:21.005 00:34:21.005 Commands Supported and Effects 00:34:21.005 ============================== 00:34:21.005 Admin Commands 00:34:21.005 -------------- 00:34:21.005 Get Log Page (02h): Supported 00:34:21.005 Identify (06h): Supported 00:34:21.005 Abort (08h): Supported 00:34:21.005 Set Features (09h): Supported 00:34:21.005 Get Features (0Ah): Supported 00:34:21.005 Asynchronous Event Request (0Ch): Supported 00:34:21.005 Keep Alive (18h): Supported 00:34:21.005 I/O Commands 00:34:21.005 ------------ 00:34:21.005 Flush (00h): Supported 00:34:21.005 Write (01h): Supported LBA-Change 00:34:21.005 Read (02h): Supported 00:34:21.005 Write Zeroes (08h): Supported LBA-Change 00:34:21.005 Dataset Management (09h): Supported 00:34:21.005 00:34:21.005 Error Log 00:34:21.005 ========= 00:34:21.005 Entry: 0 00:34:21.005 Error Count: 0x3 00:34:21.005 Submission Queue Id: 0x0 00:34:21.005 Command Id: 0x5 00:34:21.005 Phase Bit: 0 00:34:21.005 Status Code: 0x2 00:34:21.005 Status Code Type: 0x0 00:34:21.005 Do Not Retry: 1 00:34:21.005 Error Location: 0x28 00:34:21.005 LBA: 0x0 00:34:21.005 Namespace: 0x0 00:34:21.005 Vendor Log 
Page: 0x0 00:34:21.005 ----------- 00:34:21.005 Entry: 1 00:34:21.005 Error Count: 0x2 00:34:21.005 Submission Queue Id: 0x0 00:34:21.006 Command Id: 0x5 00:34:21.006 Phase Bit: 0 00:34:21.006 Status Code: 0x2 00:34:21.006 Status Code Type: 0x0 00:34:21.006 Do Not Retry: 1 00:34:21.006 Error Location: 0x28 00:34:21.006 LBA: 0x0 00:34:21.006 Namespace: 0x0 00:34:21.006 Vendor Log Page: 0x0 00:34:21.006 ----------- 00:34:21.006 Entry: 2 00:34:21.006 Error Count: 0x1 00:34:21.006 Submission Queue Id: 0x0 00:34:21.006 Command Id: 0x4 00:34:21.006 Phase Bit: 0 00:34:21.006 Status Code: 0x2 00:34:21.006 Status Code Type: 0x0 00:34:21.006 Do Not Retry: 1 00:34:21.006 Error Location: 0x28 00:34:21.006 LBA: 0x0 00:34:21.006 Namespace: 0x0 00:34:21.006 Vendor Log Page: 0x0 00:34:21.006 00:34:21.006 Number of Queues 00:34:21.006 ================ 00:34:21.006 Number of I/O Submission Queues: 128 00:34:21.006 Number of I/O Completion Queues: 128 00:34:21.006 00:34:21.006 ZNS Specific Controller Data 00:34:21.006 ============================ 00:34:21.006 Zone Append Size Limit: 0 00:34:21.006 00:34:21.006 00:34:21.006 Active Namespaces 00:34:21.006 ================= 00:34:21.006 get_feature(0x05) failed 00:34:21.006 Namespace ID:1 00:34:21.006 Command Set Identifier: NVM (00h) 00:34:21.006 Deallocate: Supported 00:34:21.006 Deallocated/Unwritten Error: Not Supported 00:34:21.006 Deallocated Read Value: Unknown 00:34:21.006 Deallocate in Write Zeroes: Not Supported 00:34:21.006 Deallocated Guard Field: 0xFFFF 00:34:21.006 Flush: Supported 00:34:21.006 Reservation: Not Supported 00:34:21.006 Namespace Sharing Capabilities: Multiple Controllers 00:34:21.006 Size (in LBAs): 3125627568 (1490GiB) 00:34:21.006 Capacity (in LBAs): 3125627568 (1490GiB) 00:34:21.006 Utilization (in LBAs): 3125627568 (1490GiB) 00:34:21.006 UUID: ed7e1dcf-3e10-49b2-b120-c00653cce20a 00:34:21.006 Thin Provisioning: Not Supported 00:34:21.006 Per-NS Atomic Units: Yes 00:34:21.006 Atomic Boundary Size (Normal): 0 00:34:21.006 Atomic Boundary Size (PFail): 0 00:34:21.006 Atomic Boundary Offset: 0 00:34:21.006 NGUID/EUI64 Never Reused: No 00:34:21.006 ANA group ID: 1 00:34:21.006 Namespace Write Protected: No 00:34:21.006 Number of LBA Formats: 1 00:34:21.006 Current LBA Format: LBA Format #00 00:34:21.006 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:21.006 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:21.006 rmmod nvme_tcp 00:34:21.006 rmmod nvme_fabrics 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:34:21.006 
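The teardown trace above shows a common pattern for unloading fabric modules: modprobe -r nvme-tcp can fail while connections still hold module references, so the helper drops errexit and retries in a bounded loop. A minimal sketch of that retry structure, assuming the shape suggested by the trace (the real nvmfcleanup in nvmf/common.sh is the authoritative version and may differ in detail):

  set +e                                # removal may fail while the module is in use
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break  # -r also drops now-unused dependencies
  done
  modprobe -v -r nvme-fabrics
  set -e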
14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.006 14:02:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:23.543 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:26.080 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:26.080 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:26.339 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:26.339 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:26.339 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:26.339 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:26.339 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:28.252 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:34:28.252 00:34:28.252 real 0m18.584s 00:34:28.252 user 0m4.205s 00:34:28.252 sys 0m9.813s 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:28.252 ************************************ 00:34:28.252 END TEST nvmf_identify_kernel_target 00:34:28.252 ************************************ 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.252 ************************************ 00:34:28.252 START TEST nvmf_auth_host 00:34:28.252 ************************************ 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:28.252 * Looking for test storage... 00:34:28.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.252 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:28.253 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:34.829 
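auth.sh exercises NVMe in-band authentication (DH-HMAC-CHAP) across every combination of the three HMAC digests and five finite-field DH groups declared above. Purely to make that matrix concrete, a hypothetical driver loop could look like the following; the actual iteration logic lives further down in auth.sh and may be structured differently:

  for digest in sha256 sha384 sha512; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
          # e.g. configure the subsystem with this digest/dhgroup pair,
          # connect with host and controller keys, verify I/O, disconnect
          echo "testing digest=$digest dhgroup=$dhgroup"
      done
  done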
14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:34.829 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:34.829 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:34.830 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:34.830 Found net devices under 0000:af:00.0: cvl_0_0 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:34.830 Found net devices under 0000:af:00.1: cvl_0_1 00:34:34.830 14:02:31 
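Device discovery here keys off PCI vendor/device IDs (0x8086:0x159b is an Intel E810-family function bound to the ice driver, per the e810 table above) and then resolves each PCI function to its kernel network interface through sysfs, exactly as the pci_net_devs glob in the trace does. An equivalent one-liner for manual inspection, assuming the same PCI address:

  pci=0000:af:00.0
  ls /sys/bus/pci/devices/$pci/net/   # prints the netdev name, e.g. cvl_0_0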
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:34.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:34:34.830 00:34:34.830 --- 10.0.0.2 ping statistics --- 00:34:34.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.830 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:34.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:34:34.830 00:34:34.830 --- 10.0.0.1 ping statistics --- 00:34:34.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.830 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=486961 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 486961 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 486961 ']' 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
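At this point nvmf_tcp_init has split the two E810 ports across network namespaces: cvl_0_0 (10.0.0.2, the target side) moves into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, with an iptables rule admitting TCP/4420 and a ping in each direction as a sanity check. Condensed from the trace, the topology boils down to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port

nvmf_tgt is then launched inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), so all fabric traffic in the test really crosses the two physical interfaces.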
00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:34.830 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f8b04ebbbf042659f2dd0c437cfbef8f 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.i4U 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f8b04ebbbf042659f2dd0c437cfbef8f 0 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f8b04ebbbf042659f2dd0c437cfbef8f 0 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f8b04ebbbf042659f2dd0c437cfbef8f 00:34:35.768 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.i4U 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.i4U 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.i4U 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:35.769 14:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5ac36bc9c60ee99961706864c958f3e2836ea05b08bcf11e6322782b9f406d18 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.t2f 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5ac36bc9c60ee99961706864c958f3e2836ea05b08bcf11e6322782b9f406d18 3 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5ac36bc9c60ee99961706864c958f3e2836ea05b08bcf11e6322782b9f406d18 3 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5ac36bc9c60ee99961706864c958f3e2836ea05b08bcf11e6322782b9f406d18 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.t2f 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.t2f 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.t2f 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c7293e2a3a898a1be94111cdd8884b63dd93dae8d35d85c 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.13z 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c7293e2a3a898a1be94111cdd8884b63dd93dae8d35d85c 0 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c7293e2a3a898a1be94111cdd8884b63dd93dae8d35d85c 0 
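Each gen_dhchap_key call in this stretch draws N random bytes with xxd from /dev/urandom and wraps them in the DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64 payload>:, where <hh> encodes the hash applied to the secret (00 = none, 01/02/03 = SHA-256/384/512, matching the digest map visible in the trace) and the payload is the key bytes followed by a CRC-32 of those bytes. A self-contained sketch of the wrapping step; the CRC byte order is assumed little-endian here, as in common implementations, and the inline python in nvmf/common.sh remains the authoritative version:

  key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes as 32 hex chars
  python3 - "$key" 0 <<'EOF'
  import base64, sys, zlib
  raw = bytes.fromhex(sys.argv[1])
  hash_id = int(sys.argv[2])             # 0=unhashed, 1=SHA-256, 2=SHA-384, 3=SHA-512
  crc = zlib.crc32(raw).to_bytes(4, 'little')   # assumed little-endian CRC-32 trailer
  print(f'DHHC-1:{hash_id:02x}:{base64.b64encode(raw + crc).decode()}:')
  EOF

The resulting files are chmod 0600 because they are secrets; keys[] hold the host-side secrets and ckeys[] the controller-side secrets for bidirectional authentication.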
00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c7293e2a3a898a1be94111cdd8884b63dd93dae8d35d85c 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.13z 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.13z 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.13z 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bd381ffd5292d7ec5a6167cbed8d8232b877dba41f41ddf8 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZMU 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bd381ffd5292d7ec5a6167cbed8d8232b877dba41f41ddf8 2 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bd381ffd5292d7ec5a6167cbed8d8232b877dba41f41ddf8 2 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bd381ffd5292d7ec5a6167cbed8d8232b877dba41f41ddf8 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:35.769 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZMU 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZMU 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZMU 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.029 14:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1cb690d0dc868115a642afee2d0bd64a 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AZA 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1cb690d0dc868115a642afee2d0bd64a 1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1cb690d0dc868115a642afee2d0bd64a 1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1cb690d0dc868115a642afee2d0bd64a 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AZA 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AZA 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.AZA 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ec7e2d01e448079ffee051a8a17915db 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uH1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ec7e2d01e448079ffee051a8a17915db 1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ec7e2d01e448079ffee051a8a17915db 1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=ec7e2d01e448079ffee051a8a17915db 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uH1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uH1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uH1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e8005f182cc17a3c2ad657b055dc64fab40f2b10bfe05bb6 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NOZ 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e8005f182cc17a3c2ad657b055dc64fab40f2b10bfe05bb6 2 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e8005f182cc17a3c2ad657b055dc64fab40f2b10bfe05bb6 2 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e8005f182cc17a3c2ad657b055dc64fab40f2b10bfe05bb6 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NOZ 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NOZ 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.NOZ 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:36.029 14:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=628824b2078d09c77130fbfcdbde6b31 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.W8N 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 628824b2078d09c77130fbfcdbde6b31 0 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 628824b2078d09c77130fbfcdbde6b31 0 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=628824b2078d09c77130fbfcdbde6b31 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:36.029 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.W8N 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.W8N 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.W8N 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a1b9482f5789abc99ccacd53a2dc845787f70b8e1c0897e218e64bfdcc6c1549 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lx4 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a1b9482f5789abc99ccacd53a2dc845787f70b8e1c0897e218e64bfdcc6c1549 3 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a1b9482f5789abc99ccacd53a2dc845787f70b8e1c0897e218e64bfdcc6c1549 3 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a1b9482f5789abc99ccacd53a2dc845787f70b8e1c0897e218e64bfdcc6c1549 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:36.289 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lx4 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lx4 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lx4 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 486961 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 486961 ']' 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:36.289 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.i4U 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.t2f ]] 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t2f 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.13z 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.549 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ZMU ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ZMU 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.AZA 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uH1 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uH1 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.NOZ 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.W8N ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.W8N 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lx4 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.550 14:02:33 
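rpc_cmd is effectively the test framework's wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock, so the block above amounts to registering each secret file with the target's keyring under a short name, e.g. (names and paths taken from this run):

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.i4U
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t2f
  # ... and likewise key1..key4 / ckey1..ckey3

The named keys can later be referenced by subsystem and host configuration instead of passing raw secrets on the command line.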
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
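get_main_ns_ip, traced above, resolves the address the initiator should dial: an associative array maps each transport to the name of the environment variable holding the address, and bash indirect expansion dereferences it; with the tcp transport that yields NVMF_INITIATOR_IP=10.0.0.1. A condensed sketch of @741-@755:

declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
NVMF_INITIATOR_IP=10.0.0.1            # exported by the test environment in the real run
transport=tcp                         # $TEST_TRANSPORT
var=${ip_candidates[$transport]}      # -> NVMF_INITIATOR_IP
ip=${!var}                            # indirect expansion -> 10.0.0.1
echo "$ip"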
-e /sys/module/nvmet ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:36.550 14:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:39.871 Waiting for block devices as requested 00:34:39.871 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:39.871 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:39.871 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:39.871 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:39.871 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:39.871 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:39.871 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:40.130 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:40.130 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:40.130 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:40.389 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:40.389 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:40.389 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:40.648 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:40.648 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:40.648 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:40.907 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:41.476 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:41.735 No valid GPT data, bailing 00:34:41.735 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:41.735 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:41.736 14:02:38 
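Before the kernel target can claim a namespace, setup.sh reset rebinds the PCI devices (the vfio-pci -> ioatdma/nvme lines above), and each /sys/block/nvme* candidate is screened: zoned namespaces are skipped, and "No valid GPT data, bailing" from spdk-gpt.py plus an empty PTTYPE from blkid marks the device as free to use. A simplified sketch of that screen (the real check also runs spdk-gpt.py first):

for block in /sys/block/nvme*; do
  dev=/dev/${block##*/}
  # skip zoned namespaces (the trace requires queue/zoned == none)
  [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
  # empty PTTYPE from blkid -> no partition table -> device is not in use
  [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && { nvme=$dev; break; }
done
echo "selected: ${nvme:-none}"   # here: /dev/nvme0n1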
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:34:41.736 00:34:41.736 Discovery Log Number of Records 2, Generation counter 2 00:34:41.736 =====Discovery Log Entry 0====== 00:34:41.736 trtype: tcp 00:34:41.736 adrfam: ipv4 00:34:41.736 subtype: current discovery subsystem 00:34:41.736 treq: not specified, sq flow control disable supported 00:34:41.736 portid: 1 00:34:41.736 trsvcid: 4420 00:34:41.736 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:41.736 traddr: 10.0.0.1 00:34:41.736 eflags: none 00:34:41.736 sectype: none 00:34:41.736 =====Discovery Log Entry 1====== 00:34:41.736 trtype: tcp 00:34:41.736 adrfam: ipv4 00:34:41.736 subtype: nvme subsystem 00:34:41.736 treq: not specified, sq flow control disable supported 00:34:41.736 portid: 1 00:34:41.736 trsvcid: 4420 00:34:41.736 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:41.736 traddr: 10.0.0.1 00:34:41.736 eflags: none 00:34:41.736 sectype: none 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
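The bare echoes around @658-@677 are the whole kernel soft-target recipe: create the subsystem, namespace and port nodes in configfs, point the namespace at the selected block device, describe the port, then symlink the subsystem into the port; the discovery log with two records (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0) confirms the listener. A reconstruction; the trace shows only the echoed values, so the configfs attribute names below are inferred from the standard nvmet layout:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"                    # @658-@660
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"       # @665 (attribute assumed)
echo 1            > "$subsys/attr_allow_any_host"                 # @667 (assumed; auth.sh@37 later writes 0)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"            # @668
echo 1            > "$subsys/namespaces/1/enable"                 # @669
echo 10.0.0.1     > "$port/addr_traddr"                           # @671
echo tcp          > "$port/addr_trtype"                           # @672
echo 4420         > "$port/addr_trsvcid"                          # @673
echo ipv4         > "$port/addr_adrfam"                           # @674
ln -s "$subsys" "$port/subsystems/"                               # @677
nvme discover -t tcp -a 10.0.0.1 -s 4420                          # expect the 2 records shown above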
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
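On the target side, nvmet_auth_set_key (auth.sh@42-@51) is four more configfs writes against the per-host entry created at @36-@38. The attribute names below are the standard nvmet DH-CHAP ones, inferred rather than shown in the trace; the key values are abbreviated here, the full strings appear above:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'            > "$host/dhchap_hash"       # @48
echo ffdhe2048                 > "$host/dhchap_dhgroup"    # @49
echo 'DHHC-1:00:NmM3Mjkz...:'  > "$host/dhchap_key"        # @50, host secret (key1)
echo 'DHHC-1:02:YmQzODFm...:'  > "$host/dhchap_ctrl_key"   # @51, controller secret (ckey1)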
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.736 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.996 nvme0n1 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:41.996 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
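The host half of the handshake is two RPCs: bdev_nvme_set_options advertises every digest and DH group the initiator may negotiate, then bdev_nvme_attach_controller dials the kernel target with the named key pair; success shows up as the nvme0n1 bdev and an nvme0 entry in bdev_nvme_get_controllers. Standalone, with the same arguments as the trace:

./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down between rounds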
00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.997 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.257 nvme0n1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.257 14:02:38 
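From here to the end of the section the trace is the same probe repeated: the @100-@102 loop markers reveal three nested for-loops, so every digest x DH-group x key-id combination gets its own set-key/attach/verify/detach round (keyids 0-4 under sha256 + ffdhe2048 first, then the cycle restarts with ffdhe3072, and so on). A schematic of host/auth.sh's driver loops, reconstructed from those markers:

for digest in sha256 sha384 sha512; do                                   # @100
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do   # @101
    for keyid in "${!keys[@]}"; do                                       # @102, keyids 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target-side configfs writes
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host attach, verify, detach
    done
  done
done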
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.257 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.257 nvme0n1 00:34:42.257 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.257 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.257 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.257 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.257 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.257 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.517 nvme0n1 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.517 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.518 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.777 nvme0n1 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.777 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.778 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.037 nvme0n1 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.037 14:02:39 
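key4 was generated without a companion controller secret (ckeys[4] is empty, which the [[ -z '' ]] test above reflects), so this round exercises unidirectional authentication: the ckey expansion at @58 collapses to nothing and --dhchap-ctrlr-key is simply omitted, meaning the host proves its identity but does not challenge the controller back:

# keyid 4: host-only (unidirectional) DH-CHAP, no --dhchap-ctrlr-key
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4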
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.037 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.038 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.297 nvme0n1 00:34:43.297 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.297 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.297 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.297 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.297 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.297 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.297 
14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.297 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.557 nvme0n1 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.557 14:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.557 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.817 nvme0n1 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.817 14:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.817 nvme0n1 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.817 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.077 14:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.077 nvme0n1 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.077 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.337 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.337 nvme0n1 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:44.596 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:44.596 14:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.597 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.856 nvme0n1 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
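Each pass traced here follows the same fixed cycle: set the target-side key with nvmet_auth_set_key, restrict the host to a single digest and DH group via bdev_nvme_set_options, attach a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), confirm it appears in bdev_nvme_get_controllers, and detach it again. The following is a minimal stand-alone sketch of that cycle, not the test script itself; it assumes scripts/rpc.py from an SPDK checkout, the 10.0.0.1:4420 listener from this run, and that the key material was already registered under the names key0..key4 and ckey0..ckey3 earlier in the test (that setup is outside this excerpt).

#!/usr/bin/env bash
# Sketch only: keys are referenced by the names registered earlier in the
# run; keyid 4 has no controller key, mirroring the empty ckey in the trace.
ckeys=("ckey0" "ckey1" "ckey2" "ckey3" "")

for keyid in 0 1 2 3 4; do
    # Allow exactly one negotiation choice so this pass tests that combination.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # The :+ expansion drops the controller-key flag when ckeys[keyid] is
    # empty, exactly as the host/auth.sh@58 trace lines show.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "${ckeys[keyid]}"}

    # Authentication succeeded iff the controller actually exists now.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0
done

The outer loop in the log then repeats this cycle for each entry in ${dhgroups[@]} (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 appear in this section), as the host/auth.sh@101 trace lines indicate.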
00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.856 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.115 nvme0n1 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.115 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.116 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.375 nvme0n1 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.375 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.376 14:02:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.376 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.635 nvme0n1 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.636 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.895 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.155 nvme0n1 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 
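The get_main_ns_ip helper is traced in full just above (nvmf/common.sh@741-755): it maps the transport to the environment variable holding the initiator-side address, then prints that variable's value through indirect expansion. A simplified reconstruction follows; the name TEST_TRANSPORT is an assumption (the trace only shows its expanded value, tcp), and the real helper may do more between the checks at @750 and the echo at @755 than is kept here.

# Reconstruction of the logic visible in the trace, under the assumptions
# stated above. '[[ -z tcp ]]' and '[[ -z NVMF_INITIATOR_IP ]]' in the log
# are these two tests after expansion.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}

# Example matching this run's environment:
#   TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip  # -> 10.0.0.1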
00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:46.155 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.156 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.724 nvme0n1 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.724 14:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:46.724 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.725 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.983 nvme0n1 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:46.983 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.984 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.551 nvme0n1 00:34:47.551 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.551 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.551 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.551 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.551 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.551 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.552 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 nvme0n1 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.811 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.812 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:48.380 nvme0n1 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.380 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.381 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.948 nvme0n1 00:34:48.948 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.948 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.948 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.948 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.948 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.948 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:49.208 
14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.208 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.776 nvme0n1 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.776 
14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.776 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.345 nvme0n1 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:50.345 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:50.346 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:50.346 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.346 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.346 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.914 nvme0n1 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:50.914 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.915 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.174 nvme0n1 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:51.174 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.175 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.434 nvme0n1 00:34:51.434 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.434 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.434 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.434 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:51.435 14:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.435 nvme0n1 00:34:51.435 14:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.435 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:51.695 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.696 nvme0n1 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.696 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.956 nvme0n1 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:51.956 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.957 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.242 nvme0n1 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.242 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.242 
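
The block above is one pass of host/auth.sh's nvmet_auth_set_key: xtrace records the three bare echo calls at auth.sh@48-50 ('hmac(sha384)', the DH group, the DHHC-1 secret) but not where their output is redirected. A plausible reconstruction follows; the configfs destination paths and the keys/ckeys arrays are assumptions, not anything shown in this log:

    # Sketch of nvmet_auth_set_key as suggested by the xtrace above.
    # xtrace does not record redirections, so every target path is assumed.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"      # arrays assumed global

        local hostnqn=nqn.2024-02.io.spdk:host0
        local cfs="/sys/kernel/config/nvmet/hosts/$hostnqn"   # assumed path

        echo "hmac($digest)" > "$cfs/dhchap_hash"             # auth.sh@48
        echo "$dhgroup"      > "$cfs/dhchap_dhgroup"          # auth.sh@49
        echo "$key"          > "$cfs/dhchap_key"              # auth.sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"  # auth.sh@51
    }
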
14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:52.242 14:02:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.242 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.514 nvme0n1 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.514 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.774 nvme0n1 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.774 nvme0n1 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.774 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.033 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.034 
14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.034 nvme0n1 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.034 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.034 
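
Every connect in this log is preceded by the same get_main_ns_ip expansion from nvmf/common.sh@741-755: an associative array maps the transport to a variable name, that name is resolved indirectly, and 10.0.0.1 (the TCP initiator address) is printed. A minimal sketch of that logic; only the success path appears in the trace, so the failure returns are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()                            # nvmf/common.sh@742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP           # nvmf/common.sh@744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP               # nvmf/common.sh@745

        # TEST_TRANSPORT=tcp in this run, so the tcp candidate is chosen (@747)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # @748: ip=NVMF_INITIATOR_IP

        [[ -z ${!ip} ]] && return 1                          # @750: indirect value is 10.0.0.1
        echo "${!ip}"                                        # @755: echo 10.0.0.1
    }
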
14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:53.293 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.294 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.553 nvme0n1 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:53.553 14:02:50 
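
Each iteration then runs connect_authenticate, which is fully visible above: bdev_nvme_set_options restricts the initiator to the single digest/DH-group pair under test, and bdev_nvme_attach_controller performs the DH-HMAC-CHAP handshake, passing --dhchap-ctrlr-key only when a controller key exists for that slot (the ${ckeys[keyid]:+...} expansion at auth.sh@58). Condensed from the trace, with rpc_cmd wrapping SPDK's rpc.py as elsewhere in the run:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})         # auth.sh@58
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"          # auth.sh@60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"                       # auth.sh@61
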
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.553 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.813 nvme0n1 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.813 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.072 nvme0n1 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:54.072 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.073 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.333 nvme0n1 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.333 14:02:51 
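
A successful handshake is confirmed the same way on every pass: the namespace surfaces as nvme0n1, bdev_nvme_get_controllers piped through jq must report the controller name nvme0, and the controller is detached so the next combination starts from a clean state. Condensed from auth.sh@64-65, with name standing in for the captured jq output (the trace only shows its expanded value):

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # auth.sh@64
    [[ $name == "nvme0" ]]                                         # auth.sh@64
    rpc_cmd bdev_nvme_detach_controller nvme0                      # auth.sh@65
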
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.333 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.642 nvme0n1 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.642 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.212 nvme0n1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.212 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.472 nvme0n1 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.472 14:02:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.472 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.732 14:02:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.732 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.992 nvme0n1 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.992 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.992 
14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.561 nvme0n1 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.561 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.562 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.563 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.824 nvme0n1 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.824 14:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.824 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.392 nvme0n1 00:34:57.392 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.392 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.393 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.651 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.219 nvme0n1 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.219 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.220 
14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.220 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.789 nvme0n1 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.789 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.359 nvme0n1 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.359 14:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.359 14:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.359 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.928 nvme0n1 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.928 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.929 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.188 nvme0n1 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.188 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.189 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.189 nvme0n1 00:35:00.189 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.189 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.189 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.189 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.447 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:00.448 
14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
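The get_main_ns_ip trace that repeats in every cycle (local ip, the ip_candidates map, two -z tests, then echo 10.0.0.1) pins the helper's logic down well enough to reconstruct it. A sketch consistent with the xtrace above; names beyond those shown in the trace (TEST_TRANSPORT in particular) are assumptions:

```bash
# Reconstructed from the xtrace: pick the name of the right IP variable for
# the transport, then dereference it. TEST_TRANSPORT is 'tcp' in this run and
# NVMF_INITIATOR_IP holds 10.0.0.1, which is what every attach connects to.
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT ]] && return 1                   # traced as: [[ -z tcp ]]
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # [[ -z NVMF_INITIATOR_IP ]]
	ip=${ip_candidates[$TEST_TRANSPORT]}                   # variable name, not the IP
	[[ -z ${!ip} ]] && return 1                            # [[ -z 10.0.0.1 ]]
	echo "${!ip}"
}
```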
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.448 nvme0n1
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==:
00:35:00.448 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp:
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==:
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp:
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.707 nvme0n1
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
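Each keyid repeats the same connect_authenticate cycle traced above: restrict the initiator's DH-CHAP options, attach with the key pair, confirm the controller actually surfaced, and detach. A condensed sketch of that flow, assuming the rpc_cmd wrapper from the test environment and the keys registered under the names key<N>/ckey<N> used in the RPCs above:

```bash
# One authentication round-trip as traced in the log (a sketch, not the
# literal host/auth.sh source): a failed DH-HMAC-CHAP handshake would leave
# no nvme0 controller behind, so the jq check is the real pass/fail test.
connect_authenticate_sketch() {
	local digest=$1 dhgroup=$2 keyid=$3
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
```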
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=:
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=:
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.707 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.708 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.966 nvme0n1
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5:
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=:
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5:
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]]
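The keyid 4 pass just traced is the unidirectional case: ckey= is empty, the [[ -z '' ]] test skips the controller-key echo, and the attach RPC carries --dhchap-key key4 alone, so only the target authenticates the host. The ${ckeys[keyid]:+...} expansion from host/auth.sh@58 is what makes the extra flag conditional; a small self-contained illustration of the idiom:

```bash
# ${var:+word} expands to 'word' only when var is set and non-empty, so an
# empty ckeys[4] yields a zero-element array and no --dhchap-ctrlr-key flag.
ckeys=([0]=ck0 [1]=ck1 [2]=ck2 [3]=ck3 [4]="")
for keyid in 3 4; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=3 -> 2 extra args: --dhchap-ctrlr-key ckey3
# keyid=4 -> 0 extra args:
```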
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=:
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:00.966 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.224 nvme0n1
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:01.224 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==:
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==:
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==:
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]]
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==:
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.224 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.225 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.483 nvme0n1
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0:
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec:
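All the secrets cycling through these traces share the DHHC-1:<t>:<base64>: layout. Per the NVMe-oF DH-HMAC-CHAP secret representation (an interpretation; this log does not spell it out), <t> encodes the secret transformation (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload carries the raw secret plus a 4-byte CRC-32. That is easy to sanity-check against a key from the log:

```bash
# Inspect one of the DHHC-1 secrets above. The field meanings (transform id,
# trailing CRC-32) are assumptions based on the NVMe-oF auth secret format.
key='DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0:'
payload=$(cut -d: -f3 <<< "$key")
echo -n "$payload" | base64 -d | wc -c   # prints 36: a 32-byte secret + 4 CRC bytes
```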
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0:
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec:
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:01.483 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:01.484 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.484 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.742 nvme0n1
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==:
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp:
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==:
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp:
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.742 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.001 nvme0n1
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
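Stepping back, the whole section is two nested loops (host/auth.sh@101-104 in the trace): for each DH group, every key slot is pushed to the target and then exercised through a connect. A sketch of that driver, assuming the helper names from the trace; ffdhe2048 finished earlier, ffdhe3072 is running here and ffdhe4096 follows, while any larger groups in the list are an assumption:

```bash
# Shape of the loop producing this log; the trailing dhgroups are assumed.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
	for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103: target side
		connect_authenticate sha512 "$dhgroup" "$keyid"  # @104: initiator side
	done
done
```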
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=:
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=:
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.001 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.261 nvme0n1
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5:
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=:
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5:
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=:
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
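One detail of the verification step reads oddly in xtrace: [[ nvme0 == \n\v\m\e\0 ]]. The right-hand side of == inside [[ ]] is a glob pattern, so the script quotes it to force a literal comparison, and xtrace renders the quoted string with every character backslash-escaped. The check itself amounts to:

```bash
# Per-cycle pass/fail check as it runs before each detach: the attach RPC can
# only have registered nvme0 if the DH-HMAC-CHAP handshake succeeded.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # quoted RHS = literal match; xtrace shows \n\v\m\e\0
```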
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.261 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.520 nvme0n1
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==:
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==:
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==:
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]]
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==:
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.520 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.521 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.780 nvme0n1
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0:
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec:
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0:
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec:
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.780 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.040 nvme0n1
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==:
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp:
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==:
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]]
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp:
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.040 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.299 nvme0n1 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.299 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.558 nvme0n1 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.558 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.818 14:03:00 
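The entries above pin the SPDK host to a single DH-HMAC-CHAP digest/dhgroup pair before each attach. Outside this harness the same two RPCs can be issued directly; a minimal sketch, assuming the in-tree scripts/rpc.py is what the rpc_cmd wrapper ultimately calls (address, port, NQNs and flags are copied from the log; key0/ckey0 are keyring entries the test registered earlier, not shown in this stretch):

  # restrict negotiation to the pair under test (cf. host/auth.sh@60)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # attach with the per-key secrets (cf. host/auth.sh@61)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
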
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.818 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.078 nvme0n1 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.078 14:03:00 
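Each attach is followed by the same verification and teardown (host/auth.sh@64-65, traced in the entries that follow): exactly one controller named nvme0 must exist, and it is detached before the next key is tried. Standalone, again assuming scripts/rpc.py:

  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1    # authentication failed if nvme0 never appeared
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
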
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.078 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.647 nvme0n1 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.647 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.970 nvme0n1 00:35:04.970 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.970 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.971 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.539 nvme0n1 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.539 14:03:02 
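The secrets cycled through this stream are NVMe DH-HMAC-CHAP (TP 8006) strings of the form DHHC-1:<t>:<base64 secret+CRC>:, where <t> records the transformation applied to the configured secret (00 = none, 01/02/03 = SHA-256/384/512); the keyid 4 iterations use a 03-type key with an empty controller key. A hypothetical way to mint compatible secrets with nvme-cli — flag names should be checked against your nvme-cli version before use:

  # --hmac selects the <t> field seen in the log; --nqn binds the transform to the host NQN
  nvme gen-dhchap-key --hmac=3 --key-length=64 --nqn nqn.2024-02.io.spdk:host0
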
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.539 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.798 nvme0n1 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:05.798 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhiMDRlYmJiZjA0MjY1OWYyZGQwYzQzN2NmYmVmOGbjO3f5: 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: ]] 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFjMzZiYzljNjBlZTk5OTYxNzA2ODY0Yzk1OGYzZTI4MzZlYTA1YjA4YmNmMTFlNjMyMjc4MmI5ZjQwNmQxOJqlxTY=: 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.799 14:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.367 nvme0n1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.367 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.938 nvme0n1 00:35:06.938 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.938 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.938 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.938 14:03:03 
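get_main_ns_ip (nvmf/common.sh@741-755), expanded in full just above, resolves the dial address in two steps: the transport selects the *name* of an environment variable, and indirect expansion yields its value — hence the NVMF_INITIATOR_IP followed by 10.0.0.1 pairs throughout the trace. The pattern reduced to its core (TEST_TRANSPORT is an assumed variable name; the log only shows its expanded literal, tcp):

  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  var=${ip_candidates[$TEST_TRANSPORT]}   # e.g. tcp -> NVMF_INITIATOR_IP
  ip=${!var}                              # indirect expansion -> 10.0.0.1
  [[ -n $ip ]] && echo "$ip"
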
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.938 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiNjkwZDBkYzg2ODExNWE2NDJhZmVlMmQwYmQ2NGGLAyK0: 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3ZTJkMDFlNDQ4MDc5ZmZlZTA1MWE4YTE3OTE1ZGL092ec: 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.197 14:03:03 
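The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 is what makes bidirectional authentication optional: when the controller key is empty the array expands to nothing, which is why the keyid=4 attaches earlier in the stream carry only --dhchap-key key4. A short illustration of the :+ behaviour:

  ckeys[4]=''                                    # keyid 4 ships no controller key
  extra=(${ckeys[4]:+--dhchap-ctrlr-key ckey4})  # empty value -> empty array
  echo "${#extra[@]}"                            # prints 0: attach runs unidirectional
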
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.197 14:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.766 nvme0n1 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTgwMDVmMTgyY2MxN2EzYzJhZDY1N2IwNTVkYzY0ZmFiNDBmMmIxMGJmZTA1YmI2q2FMwA==: 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI4ODI0YjIwNzhkMDljNzcxMzBmYmZjZGJkZTZiMzEgZbgp: 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.766 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.767 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.767 14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.767 
14:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.335 nvme0n1 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTFiOTQ4MmY1Nzg5YWJjOTljY2FjZDUzYTJkYzg0NTc4N2Y3MGI4ZTFjMDg5N2UyMThlNjRiZmRjYzZjMTU0OVHm/es=: 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:08.335 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.336 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.902 nvme0n1 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM3MjkzZTJhM2E4OThhMWJlOTQxMTFjZGQ4ODg0YjYzZGQ5M2RhZThkMzVkODVj0qTXRA==: 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmQzODFmZmQ1MjkyZDdlYzVhNjE2N2NiZWQ4ZDgyMzJiODc3ZGJhNDFmNDFkZGY4aQD+8A==: 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.902 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.161 request: 00:35:09.161 { 00:35:09.161 "name": "nvme0", 00:35:09.161 "trtype": "tcp", 00:35:09.161 "traddr": "10.0.0.1", 00:35:09.161 "adrfam": "ipv4", 00:35:09.161 "trsvcid": "4420", 00:35:09.161 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:09.161 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:09.161 "prchk_reftag": false, 00:35:09.161 "prchk_guard": false, 00:35:09.161 "hdgst": false, 00:35:09.161 "ddgst": false, 00:35:09.161 "method": "bdev_nvme_attach_controller", 00:35:09.161 "req_id": 1 00:35:09.161 } 00:35:09.161 Got JSON-RPC error response 00:35:09.161 response: 00:35:09.161 { 00:35:09.161 "code": -5, 00:35:09.161 "message": "Input/output error" 00:35:09.161 } 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.161 14:03:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.161 request: 00:35:09.161 { 00:35:09.161 "name": "nvme0", 00:35:09.161 "trtype": "tcp", 00:35:09.161 "traddr": "10.0.0.1", 00:35:09.161 "adrfam": "ipv4", 00:35:09.161 "trsvcid": "4420", 00:35:09.161 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:09.161 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:09.161 "prchk_reftag": false, 00:35:09.161 "prchk_guard": false, 00:35:09.161 "hdgst": false, 00:35:09.161 "ddgst": false, 00:35:09.161 "dhchap_key": "key2", 00:35:09.161 "method": "bdev_nvme_attach_controller", 00:35:09.161 "req_id": 1 00:35:09.161 } 00:35:09.161 Got JSON-RPC error response 00:35:09.161 response: 00:35:09.161 { 00:35:09.161 "code": -5, 00:35:09.161 "message": "Input/output error" 00:35:09.161 } 00:35:09.161 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.162 14:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.421 request: 00:35:09.421 { 00:35:09.421 "name": "nvme0", 00:35:09.421 "trtype": "tcp", 00:35:09.421 "traddr": "10.0.0.1", 00:35:09.421 "adrfam": "ipv4", 00:35:09.421 "trsvcid": "4420", 00:35:09.421 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:09.421 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:09.421 "prchk_reftag": false, 00:35:09.421 "prchk_guard": false, 00:35:09.421 "hdgst": false, 00:35:09.421 "ddgst": false, 00:35:09.422 "dhchap_key": "key1", 00:35:09.422 "dhchap_ctrlr_key": "ckey2", 00:35:09.422 "method": "bdev_nvme_attach_controller", 00:35:09.422 "req_id": 1 00:35:09.422 } 00:35:09.422 Got JSON-RPC error response 00:35:09.422 response: 00:35:09.422 { 00:35:09.422 "code": -5, 00:35:09.422 "message": "Input/output error" 00:35:09.422 } 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:09.422 rmmod nvme_tcp 00:35:09.422 rmmod nvme_fabrics 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 486961 ']' 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 486961 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 486961 ']' 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 486961 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 486961 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 486961' 00:35:09.422 killing process with pid 486961 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 486961 00:35:09.422 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 486961 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:09.681 14:03:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.681 14:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:11.586 14:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:14.879 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:14.879 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:16.258 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:16.258 14:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.i4U /tmp/spdk.key-null.13z /tmp/spdk.key-sha256.AZA /tmp/spdk.key-sha384.NOZ /tmp/spdk.key-sha512.lx4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:16.258 14:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:19.548 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:19.548 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:19.548 00:35:19.548 real 0m51.440s 00:35:19.548 user 0m43.575s 00:35:19.548 sys 0m14.408s 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.548 ************************************ 00:35:19.548 END TEST nvmf_auth_host 00:35:19.548 ************************************ 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.548 ************************************ 00:35:19.548 START TEST nvmf_digest 00:35:19.548 ************************************ 00:35:19.548 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:19.808 * Looking for test storage... 
00:35:19.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:19.808 
14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:35:19.808 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:26.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.471 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:26.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.472 
14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:26.472 Found net devices under 0000:af:00.0: cvl_0_0 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:26.472 Found net devices under 0000:af:00.1: cvl_0_1 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.472 14:03:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:26.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:35:26.472 00:35:26.472 --- 10.0.0.2 ping statistics --- 00:35:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.472 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:26.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:35:26.472 00:35:26.472 --- 10.0.0.1 ping statistics --- 00:35:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.472 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.472 ************************************ 00:35:26.472 START TEST nvmf_digest_clean 00:35:26.472 ************************************ 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=500772 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 500772 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 500772 ']' 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:26.472 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:26.472 [2024-07-25 14:03:22.989812] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:26.473 [2024-07-25 14:03:22.989860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.473 EAL: No free 2048 kB hugepages reported on node 1 00:35:26.473 [2024-07-25 14:03:23.032675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:26.473 [2024-07-25 14:03:23.067379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.473 [2024-07-25 14:03:23.106259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.473 [2024-07-25 14:03:23.106300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.473 [2024-07-25 14:03:23.106309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.473 [2024-07-25 14:03:23.106317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:26.473 [2024-07-25 14:03:23.106324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.473 [2024-07-25 14:03:23.106343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.041 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:27.041 null0 00:35:27.041 [2024-07-25 14:03:23.908521] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.301 [2024-07-25 14:03:23.932729] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=500917 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 500917 /var/tmp/bperf.sock 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 500917 ']' 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:27.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.301 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:27.301 [2024-07-25 14:03:23.969279] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:27.301 [2024-07-25 14:03:23.969327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500917 ] 00:35:27.301 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.301 [2024-07-25 14:03:24.005937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:27.301 [2024-07-25 14:03:24.040465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.301 [2024-07-25 14:03:24.078071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.301 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:27.301 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:27.301 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:27.301 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:27.301 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:27.561 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.561 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.820 nvme0n1 00:35:27.820 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:27.820 14:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:27.820 Running I/O for 2 seconds... 
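The clean-digest passes drive bdevperf entirely over its private RPC socket: framework_start_init completes startup (bdevperf was launched with --wait-for-rpc), bdev_nvme_attach_controller --ddgst attaches the NVMe/TCP controller with the data digest enabled, and bdevperf.py perform_tests triggers the 2-second workload. Condensed from the commands traced above, with the repository paths shortened:

  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  bdevperf.py -s /var/tmp/bperf.sock perform_tests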
00:35:30.356 00:35:30.356 Latency(us) 00:35:30.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.356 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:30.356 nvme0n1 : 2.00 28603.15 111.73 0.00 0.00 4470.29 2162.69 12478.05 00:35:30.356 =================================================================================================================== 00:35:30.356 Total : 28603.15 111.73 0.00 0.00 4470.29 2162.69 12478.05 00:35:30.356 0 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:30.356 | select(.opcode=="crc32c") 00:35:30.356 | "\(.module_name) \(.executed)"' 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 500917 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 500917 ']' 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 500917 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:30.356 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:30.357 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 500917 00:35:30.357 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:30.357 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:30.357 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 500917' 00:35:30.357 killing process with pid 500917 00:35:30.357 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 500917 00:35:30.357 Received shutdown signal, test time was about 2.000000 seconds 00:35:30.357 00:35:30.357 Latency(us) 00:35:30.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.357 =================================================================================================================== 00:35:30.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.357 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 500917 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=501455 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 501455 /var/tmp/bperf.sock 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 501455 ']' 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:30.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.357 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:30.357 [2024-07-25 14:03:27.170765] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:30.357 [2024-07-25 14:03:27.170820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501455 ] 00:35:30.357 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:30.357 Zero copy mechanism will not be used. 00:35:30.357 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.357 [2024-07-25 14:03:27.206279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
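Each pass of this test follows the same parameterization, run_bperf <rw> <bs> <qd> <scan_dsa>; the pass starting here uses 131072-byte I/O at queue depth 16, which exceeds bdevperf's 65536-byte zero-copy threshold, hence the repeated "Zero copy mechanism will not be used" notice (informational, not a failure). The four invocations nvmf_digest_clean makes, read off the host/digest.sh line numbers visible in this log:

  run_bperf randread   4096 128 false    # digest.sh@128
  run_bperf randread 131072  16 false    # digest.sh@129
  run_bperf randwrite  4096 128 false    # digest.sh@130
  run_bperf randwrite 131072  16 false   # digest.sh@131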
00:35:30.357 [2024-07-25 14:03:27.241400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.617 [2024-07-25 14:03:27.276316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.617 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:30.617 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:30.617 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:30.617 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:30.617 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:30.876 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.876 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.135 nvme0n1 00:35:31.135 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:31.135 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:31.135 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:31.135 Zero copy mechanism will not be used. 00:35:31.135 Running I/O for 2 seconds... 
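After each 2-second run the suite reads back the accel framework statistics to confirm which module actually computed the crc32c digests; with scan_dsa=false the expected module is software, and the executed count must be greater than zero. The check is the jq pipeline shown above:

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # Illustrative output shape: software <nonzero count>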
00:35:33.671 00:35:33.671 Latency(us) 00:35:33.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.671 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:33.671 nvme0n1 : 2.00 4173.86 521.73 0.00 0.00 3830.18 943.72 9804.19 00:35:33.671 =================================================================================================================== 00:35:33.671 Total : 4173.86 521.73 0.00 0.00 3830.18 943.72 9804.19 00:35:33.671 0 00:35:33.671 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:33.671 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:33.671 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:33.671 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:33.671 | select(.opcode=="crc32c") 00:35:33.671 | "\(.module_name) \(.executed)"' 00:35:33.671 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 501455 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 501455 ']' 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 501455 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 501455 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 501455' 00:35:33.671 killing process with pid 501455 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 501455 00:35:33.671 Received shutdown signal, test time was about 2.000000 seconds 00:35:33.671 00:35:33.671 Latency(us) 00:35:33.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.671 =================================================================================================================== 00:35:33.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 501455 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=501989 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 501989 /var/tmp/bperf.sock 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 501989 ']' 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:33.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:33.671 [2024-07-25 14:03:30.410267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:33.671 [2024-07-25 14:03:30.410322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501989 ] 00:35:33.671 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.671 [2024-07-25 14:03:30.446564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
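Between passes the previous bdevperf instance is torn down with the killprocess helper traced above: it confirms the PID still names the expected process (here reactor_1, from bdevperf's -m 2 core mask), refuses to kill anything running as sudo, then kills and waits on the PID so the shutdown statistics flush. A simplified sketch of that pattern, using a PID from this log (the real helper has more branches):

  pid=501455
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  fi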
00:35:33.671 [2024-07-25 14:03:30.481606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.671 [2024-07-25 14:03:30.520989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:33.671 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:33.930 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:33.930 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.189 nvme0n1 00:35:34.448 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:34.448 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:34.448 Running I/O for 2 seconds... 
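A quick consistency check on the result tables: MiB/s is just IOPS times the I/O size. For the 4096-byte randread pass above, 28603.15 IOPS x 4096 B / 1048576 ≈ 111.73 MiB/s, matching the table; for the 131072-byte passes the factor is exactly 1/8 MiB per I/O (4173.86 / 8 ≈ 521.73). As a one-liner:

  echo '28603.15 * 4096 / 1048576' | bc -l    # ≈ 111.73, the MiB/s column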
00:35:36.359 00:35:36.359 Latency(us) 00:35:36.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.359 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:36.359 nvme0n1 : 2.00 28393.46 110.91 0.00 0.00 4500.24 1979.19 9909.04 00:35:36.359 =================================================================================================================== 00:35:36.359 Total : 28393.46 110.91 0.00 0.00 4500.24 1979.19 9909.04 00:35:36.359 0 00:35:36.359 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:36.359 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:36.359 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:36.359 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:36.359 | select(.opcode=="crc32c") 00:35:36.359 | "\(.module_name) \(.executed)"' 00:35:36.359 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 501989 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 501989 ']' 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 501989 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 501989 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 501989' 00:35:36.620 killing process with pid 501989 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 501989 00:35:36.620 Received shutdown signal, test time was about 2.000000 seconds 00:35:36.620 00:35:36.620 Latency(us) 00:35:36.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.620 =================================================================================================================== 00:35:36.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:36.620 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 501989 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=502530 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 502530 /var/tmp/bperf.sock 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 502530 ']' 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:36.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:36.880 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.880 [2024-07-25 14:03:33.637893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:36.880 [2024-07-25 14:03:33.637947] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502530 ] 00:35:36.880 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:36.880 Zero copy mechanism will not be used. 00:35:36.880 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.880 [2024-07-25 14:03:33.673205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
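The bdevperf command line is identical across passes except for -w/-o/-q: -m 2 pins it to core 1 (hence the reactor_1 process name), -r gives it a private RPC socket, -t 2 bounds the run to two seconds, -z makes it wait for an external perform_tests trigger, and --wait-for-rpc holds initialization until the accel layer can be configured. For this final clean pass:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc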
00:35:36.880 [2024-07-25 14:03:33.707478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.880 [2024-07-25 14:03:33.741806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.140 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.708 nvme0n1 00:35:37.709 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:37.709 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:37.709 Zero copy mechanism will not be used. 00:35:37.709 Running I/O for 2 seconds... 
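A reading note for lines like [[ software == \s\o\f\t\w\a\r\e ]] throughout this log: bash xtrace backslash-escapes every character of the right-hand side of a [[ ]] comparison when that operand was quoted in the script (so it matches literally rather than as a glob). The backslashes are a display artifact of set -x, not text in the script, and are easy to reproduce:

  set -x
  exp_module=software
  [[ software == "$exp_module" ]]    # xtrace prints: [[ software == \s\o\f\t\w\a\r\e ]]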
00:35:39.615 00:35:39.615 Latency(us) 00:35:39.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.615 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:39.615 nvme0n1 : 2.00 4597.69 574.71 0.00 0.00 3475.11 2411.72 25375.54 00:35:39.615 =================================================================================================================== 00:35:39.615 Total : 4597.69 574.71 0.00 0.00 3475.11 2411.72 25375.54 00:35:39.615 0 00:35:39.615 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:39.615 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:39.615 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:39.615 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:39.615 | select(.opcode=="crc32c") 00:35:39.615 | "\(.module_name) \(.executed)"' 00:35:39.615 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 502530 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 502530 ']' 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 502530 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 502530 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 502530' 00:35:39.874 killing process with pid 502530 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 502530 00:35:39.874 Received shutdown signal, test time was about 2.000000 seconds 00:35:39.874 00:35:39.874 Latency(us) 00:35:39.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.874 =================================================================================================================== 00:35:39.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:39.874 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 502530 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 500772 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 500772 ']' 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 500772 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 500772 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 500772' 00:35:40.133 killing process with pid 500772 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 500772 00:35:40.133 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 500772 00:35:40.392 00:35:40.392 real 0m14.138s 00:35:40.392 user 0m25.687s 00:35:40.392 sys 0m4.909s 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:40.392 ************************************ 00:35:40.392 END TEST nvmf_digest_clean 00:35:40.392 ************************************ 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:40.392 ************************************ 00:35:40.392 START TEST nvmf_digest_error 00:35:40.392 ************************************ 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=503091 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 503091 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 503091 ']' 00:35:40.392 14:03:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:40.392 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:40.392 [2024-07-25 14:03:37.204927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:40.392 [2024-07-25 14:03:37.204974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.392 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.392 [2024-07-25 14:03:37.244723] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:40.392 [2024-07-25 14:03:37.280085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.651 [2024-07-25 14:03:37.318829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.651 [2024-07-25 14:03:37.318868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.651 [2024-07-25 14:03:37.318878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.651 [2024-07-25 14:03:37.318886] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.651 [2024-07-25 14:03:37.318893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
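nvmf_digest_error repeats the same bring-up, but because both the target and bdevperf are started with --wait-for-rpc it can reroute crc32c into the accel error-injection module before anything initializes, and it tells the bdev layer to retry failed I/O indefinitely so injected digest failures surface as retried transient errors rather than test failures. Condensed from the RPCs traced below (rpc_cmd targets the nvmf target's default socket; -i 256 reads as the injection interval, per its usage here):

  # Target side: route crc32c to the error module, injection initially disabled.
  rpc.py accel_assign_opc -o crc32c -m error
  rpc.py accel_error_inject_error -o crc32c -t disable
  # Initiator side (bdevperf): keep NVMe error stats and retry forever.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target side: start corrupting crc32c results, one in every 256 operations.
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256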
00:35:40.651 [2024-07-25 14:03:37.318915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.220 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:41.220 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:41.220 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:41.220 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:41.220 14:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:41.220 [2024-07-25 14:03:38.028994] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.220 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:41.220 null0 00:35:41.480 [2024-07-25 14:03:38.111955] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.480 [2024-07-25 14:03:38.136152] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=503369 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 503369 /var/tmp/bperf.sock 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 503369 ']' 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:41.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:41.480 [2024-07-25 14:03:38.172402] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:41.480 [2024-07-25 14:03:38.172449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503369 ] 00:35:41.480 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.480 [2024-07-25 14:03:38.207854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:41.480 [2024-07-25 14:03:38.242829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.480 [2024-07-25 14:03:38.281375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:41.480 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:41.739 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:41.739 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.739 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:41.739 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.739 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:41.739 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:42.001 nvme0n1 00:35:42.001 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 
256 00:35:42.001 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.001 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:42.001 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.001 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:42.001 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:42.324 Running I/O for 2 seconds... 00:35:42.324 [2024-07-25 14:03:38.930592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.930628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.930641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.940166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.940193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.940205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.948441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.948465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.948477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.958570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.958593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.958604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.967921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.967944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.967955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.976037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.976060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.976071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.984720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.984742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.984753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:38.994426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:38.994453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:38.994463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.002635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.002657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.002667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.011333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.011355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.011365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.020537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.020560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.020571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.030203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.030225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.030235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.038247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 
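The repeating triplets in this stretch show the injection being absorbed: nvme_tcp.c:1459 flags the data digest mismatch on a received payload, nvme_qpair.c prints the affected READ, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --bdev-retry-count -1 each such I/O is retried instead of failed, so the 2-second workload keeps running. Assuming the bdevperf output were captured to a file, the injected errors could be tallied with something like:

  # Hypothetical capture file; counts digest mismatches seen by the initiator.
  grep -c 'data digest error on tqpair' bperf.log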
00:35:42.324 [2024-07-25 14:03:39.038270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.038281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.047680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.047712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.056096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.056117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.056128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.065506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.065528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.065538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.073920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.073943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.073953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.082633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.082655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.082665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.092056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.092079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.092089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.100110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.100132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.100143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.109951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.109973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.109984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.117887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.117909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.117919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.127414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.127436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.324 [2024-07-25 14:03:39.127446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.324 [2024-07-25 14:03:39.136033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.324 [2024-07-25 14:03:39.136056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.136066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.325 [2024-07-25 14:03:39.145367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.325 [2024-07-25 14:03:39.145390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.145404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.325 [2024-07-25 14:03:39.153202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.325 [2024-07-25 14:03:39.153225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.325 [2024-07-25 14:03:39.162952] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.325 [2024-07-25 14:03:39.162975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.162985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.325 [2024-07-25 14:03:39.172288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.325 [2024-07-25 14:03:39.172310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.172320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.325 [2024-07-25 14:03:39.180489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.325 [2024-07-25 14:03:39.180510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.180521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.325 [2024-07-25 14:03:39.189937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.325 [2024-07-25 14:03:39.189959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.325 [2024-07-25 14:03:39.189969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.199429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.584 [2024-07-25 14:03:39.199451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.584 [2024-07-25 14:03:39.199462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.208778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.584 [2024-07-25 14:03:39.208800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.584 [2024-07-25 14:03:39.208810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.216903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.584 [2024-07-25 14:03:39.216925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.584 [2024-07-25 14:03:39.216936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.225568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.584 [2024-07-25 14:03:39.225594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.584 [2024-07-25 14:03:39.225605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.235160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.584 [2024-07-25 14:03:39.235182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.584 [2024-07-25 14:03:39.235193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.243639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.584 [2024-07-25 14:03:39.243661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.584 [2024-07-25 14:03:39.243672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.584 [2024-07-25 14:03:39.252774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.252796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.252806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.261428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.261449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.261460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.270279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.270302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.270312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.279364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.279386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.279396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.288128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.288150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.288161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.296227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.296248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.296259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.305952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.305974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.305984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.315701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.315729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.315740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.323067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.323089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.323099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.333071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.333093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.333104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.341779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.341801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.341811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.350473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.350495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.350505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.360111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.360134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.360144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.369372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.369393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.369404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.377940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.377961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.377976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.387675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.387697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.387708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.395553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.395575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.395585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.405304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.405326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:42.585 [2024-07-25 14:03:39.405336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.413506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.413528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.413539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.422628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.422650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.422660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.431559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.431581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.431591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.440552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.440574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.440585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.448850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.448872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.448882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.459213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.459235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.459246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.585 [2024-07-25 14:03:39.466700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.585 [2024-07-25 14:03:39.466729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.585 [2024-07-25 14:03:39.466740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.476690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.476713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.476730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.484448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.484471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.484481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.494591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.494613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.494623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.503091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.503114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.503125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.512127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.512150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.512160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.520722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.520743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.520754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.530728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.530750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.530764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.537939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.537961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.537971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.547673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.547695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.547705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.557786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.557808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.557818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.565769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.565790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.565801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.577106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.577128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.577139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.585131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.585153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.585164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.594298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 
00:35:42.844 [2024-07-25 14:03:39.594320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.594330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.603117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.603139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.603149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.611856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.844 [2024-07-25 14:03:39.611884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.844 [2024-07-25 14:03:39.611894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.844 [2024-07-25 14:03:39.620342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.620364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.620374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.630228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.630250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.630261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.638281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.638303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.638313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.648054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.648076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.648087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.655759] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.655781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.655791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.666876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.666899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.666909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.674181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.674203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.674214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.683475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.683498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.683508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.692810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.692833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.692844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.702444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.702468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.702479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.710607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.710630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.710640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:42.845 [2024-07-25 14:03:39.719781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.719804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.719814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.845 [2024-07-25 14:03:39.728501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:42.845 [2024-07-25 14:03:39.728524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.845 [2024-07-25 14:03:39.728535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.737751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.737773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.737784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.745631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.745653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.745663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.755576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.755598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.755608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.764548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.764569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.764583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.772949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.772970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.772981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.782918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.782941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.782951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.790659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.790681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.790692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.800507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.800528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.800538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.809338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.809360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.809370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.817205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.817227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.817237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.827171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.827193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.827204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.834706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.834733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.834743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.844741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.844773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.853006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.853027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.853037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.862349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.862372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.862382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.872084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.872105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.872116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.879930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.879952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.879963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.889780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.889802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.105 [2024-07-25 14:03:39.889813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.105 [2024-07-25 14:03:39.898173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.105 [2024-07-25 14:03:39.898194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:43.105 [2024-07-25 14:03:39.898205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.907408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.907430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.907440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.916653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.916674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.916688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.925511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.925533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.925543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.934899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.934920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.934931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.942948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.942970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.942982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.952433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.952455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.952466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.961680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.961701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13181 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.961712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.970610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.970631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.970642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.979099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.979120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.979131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.106 [2024-07-25 14:03:39.987557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.106 [2024-07-25 14:03:39.987579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.106 [2024-07-25 14:03:39.987590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:39.996849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:39.996874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:39.996885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.006285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.006308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.006319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.014380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.014403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.014413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.023634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.023656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.023668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.033590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.033615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.033627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.042414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.042436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.042447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.050076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.050098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.050109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.060064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.060085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.060095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.069060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.069083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.365 [2024-07-25 14:03:40.069093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.365 [2024-07-25 14:03:40.078850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.365 [2024-07-25 14:03:40.078872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.078882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.086567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 
00:35:43.366 [2024-07-25 14:03:40.086590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.086600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.095866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.095887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.095898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.104789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.104811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.113323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.113345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.113355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.122839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.122861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.122871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.132810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.132832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.132843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.140678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.140700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.140710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.150223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.150245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.150258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.158412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.158434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.158444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.167868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.167890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.167900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.175550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.175572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.175582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.186448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.186470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.186482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.194139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.194161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.194171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.203373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.203394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.203405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.212801] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.212823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.212834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.221121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.221142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.221153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.230544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.230566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.230576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.239578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.239600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.239610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.366 [2024-07-25 14:03:40.247179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.366 [2024-07-25 14:03:40.247201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.366 [2024-07-25 14:03:40.247211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.257389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.257411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.257422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.265755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.265777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.265787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:43.626 [2024-07-25 14:03:40.274617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.274639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.274649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.283562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.283583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.283594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.291625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.291646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.291656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.302086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.302108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.302122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.310615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.310638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.310648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.319388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.319419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.328561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.328583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.328593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.337130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.337151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.337162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.346485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.346507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.346518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.354267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.354289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.354300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.364000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.364024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.364035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.373201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.373223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.373233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.381134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.381159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.381170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.390406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.390428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.390439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.398364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.398388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.398399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.408614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.408637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.408647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.417655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.417677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.417689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.426304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.426327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.426338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.435234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.435257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.443732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.443753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.443764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.453212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.453234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:43.626 [2024-07-25 14:03:40.453245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.461758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.461780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.461790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.471594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.471616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.626 [2024-07-25 14:03:40.471626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.626 [2024-07-25 14:03:40.479290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.626 [2024-07-25 14:03:40.479312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.627 [2024-07-25 14:03:40.479323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.627 [2024-07-25 14:03:40.489422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.627 [2024-07-25 14:03:40.489444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.627 [2024-07-25 14:03:40.489455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.627 [2024-07-25 14:03:40.497363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.627 [2024-07-25 14:03:40.497385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.627 [2024-07-25 14:03:40.497395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.627 [2024-07-25 14:03:40.507101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.627 [2024-07-25 14:03:40.507123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.627 [2024-07-25 14:03:40.507133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.516659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.516682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:15800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.516693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.525051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.525074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.525084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.533647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.533669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.533683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.543217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.543239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.551780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.551802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.551813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.560517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.560540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.560550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.570446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.570468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.570479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.578296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.578318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.578329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.587710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.887 [2024-07-25 14:03:40.587738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.887 [2024-07-25 14:03:40.587749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.887 [2024-07-25 14:03:40.597212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.597233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.597243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.605464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.605486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.605496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.614450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.614486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.622877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.622899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.622909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.632936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.632958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.632969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.640695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 
00:35:43.888 [2024-07-25 14:03:40.640723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.640734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.650495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.650517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.650527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.659020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.659042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.659053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.667396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.667418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.667429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.676935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.676957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.676967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.685053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.685074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.685085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.694701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.694729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.694739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.703611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.703634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.703644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.712841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.712864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.712874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.721367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.721389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.721400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.728887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.728909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.728919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.739309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.739332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.739343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.748691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.748719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.748730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.755927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.755949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.755959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:43.888 [2024-07-25 14:03:40.766038] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:43.888 [2024-07-25 14:03:40.766065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.888 [2024-07-25 14:03:40.766075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.775284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.775307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.775318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.783437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.783459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.783469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.792882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.792905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.792915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.802747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.802770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.802781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.810864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.810887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.810897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.819916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.819938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.819950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:44.148 [2024-07-25 14:03:40.829273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.829295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.829306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.148 [2024-07-25 14:03:40.837941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.148 [2024-07-25 14:03:40.837964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.148 [2024-07-25 14:03:40.837974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.149 [2024-07-25 14:03:40.846776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.149 [2024-07-25 14:03:40.846797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.149 [2024-07-25 14:03:40.846808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.149 [2024-07-25 14:03:40.855257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.149 [2024-07-25 14:03:40.855280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.149 [2024-07-25 14:03:40.855290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.149 [2024-07-25 14:03:40.863790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.149 [2024-07-25 14:03:40.863812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.149 [2024-07-25 14:03:40.863823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.149 [2024-07-25 14:03:40.873369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.149 [2024-07-25 14:03:40.873391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.149 [2024-07-25 14:03:40.873402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:44.149 [2024-07-25 14:03:40.882640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800) 00:35:44.149 [2024-07-25 14:03:40.882663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.149 [2024-07-25 14:03:40.882673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:44.149 [2024-07-25 14:03:40.890687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800)
00:35:44.149 [2024-07-25 14:03:40.890709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.149 [2024-07-25 14:03:40.890725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:44.149 [2024-07-25 14:03:40.900334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800)
00:35:44.149 [2024-07-25 14:03:40.900356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.149 [2024-07-25 14:03:40.900366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:44.149 [2024-07-25 14:03:40.908443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800)
00:35:44.149 [2024-07-25 14:03:40.908465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.149 [2024-07-25 14:03:40.908476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:44.149 [2024-07-25 14:03:40.918104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x201a800)
00:35:44.149 [2024-07-25 14:03:40.918126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.149 [2024-07-25 14:03:40.918140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:44.149
00:35:44.149 Latency(us)
00:35:44.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:44.149 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:44.149 nvme0n1 : 2.00 28452.10 111.14 0.00 0.00 4493.37 2280.65 12425.63
00:35:44.149 ===================================================================================================================
00:35:44.149 Total : 28452.10 111.14 0.00 0.00 4493.37 2280.65 12425.63
00:35:44.149 0
00:35:44.149 14:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:44.149 14:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:44.149 | .driver_specific
00:35:44.149 | .nvme_error
00:35:44.149 | .status_code
00:35:44.149 | .command_transient_transport_error'
00:35:44.149 14:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 223 > 0 ))
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 503369
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 503369 ']'
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 503369
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 503369
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 503369'
killing process with pid 503369
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 503369
Received shutdown signal, test time was about 2.000000 seconds
00:35:44.409
00:35:44.409 Latency(us)
00:35:44.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:44.409 ===================================================================================================================
00:35:44.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:44.409 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 503369
00:35:44.668 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=503906
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 503906 /var/tmp/bperf.sock
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 503906 ']'
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
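The pass/fail gate in the trace above comes from digest.sh's get_transient_errcount: it queries the bdevperf process's RPC server for per-bdev I/O statistics and extracts the NVMe error counter with jq, and the run passes because the injected CRC32C digest corruption surfaced as 223 COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal standalone sketch of the same check, assuming the RPC socket path and bdev name used in this run:

  # Query bdevperf's iostat over the bperf RPC socket; --nvme-error-stat
  # (enabled via bdev_nvme_set_options in the trace) exposes per-status-code
  # NVMe error counters under driver_specific.nvme_error.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # A nonzero count means the data digest errors were reported back as
  # transient transport errors, which is what the test asserts.
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"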
00:35:44.668 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:44.668 14:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:44.669 [2024-07-25 14:03:41.392115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:35:44.669 [2024-07-25 14:03:41.392171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503906 ]
00:35:44.669 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:44.669 Zero copy mechanism will not be used.
00:35:44.669 EAL: No free 2048 kB hugepages reported on node 1
00:35:44.669 [2024-07-25 14:03:41.429021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:35:44.669 [2024-07-25 14:03:41.462796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:44.669 [2024-07-25 14:03:41.499973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:45.605 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:45.864 nvme0n1
14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@10 -- # set +x 00:35:45.864 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.864 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:45.864 14:03:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:45.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:45.864 Zero copy mechanism will not be used. 00:35:45.864 Running I/O for 2 seconds... 00:35:45.864 [2024-07-25 14:03:42.725180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:45.865 [2024-07-25 14:03:42.725215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.865 [2024-07-25 14:03:42.725228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:45.865 [2024-07-25 14:03:42.736797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:45.865 [2024-07-25 14:03:42.736824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.865 [2024-07-25 14:03:42.736835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:45.865 [2024-07-25 14:03:42.747016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:45.865 [2024-07-25 14:03:42.747039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.865 [2024-07-25 14:03:42.747050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.757182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.757206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.757217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.766733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.766757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.766768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.776989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.777014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.777025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.788633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.788658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.788669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.799659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.799682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.799697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.809030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.809052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.809063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.817496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.817520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.817531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.825728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.825750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.825760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.834080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.834102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.834113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.842764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.842787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.842798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.851349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.851372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.851383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.859503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.859526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.859537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.867320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.867343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.867353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.875228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.875255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.875265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.883369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.883392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.883402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.891411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.891434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.891445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.899217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 
14:03:42.899240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.899251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.906767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.906790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.906801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.913702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.913730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.913741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.920398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.920421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.920432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.926829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.926853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.926863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.933530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.933553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.933563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.940182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.940205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.940216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.946873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.946896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.946906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.953096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.953119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.953130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.125 [2024-07-25 14:03:42.959721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.125 [2024-07-25 14:03:42.959744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.125 [2024-07-25 14:03:42.959754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.126 [2024-07-25 14:03:42.966988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.126 [2024-07-25 14:03:42.967011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.126 [2024-07-25 14:03:42.967021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.126 [2024-07-25 14:03:42.975661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.126 [2024-07-25 14:03:42.975685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.126 [2024-07-25 14:03:42.975696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.126 [2024-07-25 14:03:42.984528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.126 [2024-07-25 14:03:42.984552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.126 [2024-07-25 14:03:42.984562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.126 [2024-07-25 14:03:42.992574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.126 [2024-07-25 14:03:42.992597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.126 [2024-07-25 14:03:42.992607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.126 [2024-07-25 14:03:43.000894] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.126 [2024-07-25 14:03:43.000918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.126 [2024-07-25 14:03:43.000932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.126 [2024-07-25 14:03:43.010330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.126 [2024-07-25 14:03:43.010355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.126 [2024-07-25 14:03:43.010366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.020944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.020968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.020978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.030288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.030310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.030321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.039849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.039872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.039883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.048616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.048640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.048651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.057875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.057902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.057913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:46.386 [2024-07-25 14:03:43.066864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.066888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.066898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.076282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.076306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.076318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.085696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.085734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.085745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.095516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.095540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.095551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.104657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.104681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.104692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.113759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.113783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.113794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.121688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.121712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.121729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.129573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.129598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.129608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.137779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.137802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.137813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.145647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.145671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.145682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.154528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.386 [2024-07-25 14:03:43.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.386 [2024-07-25 14:03:43.154563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.386 [2024-07-25 14:03:43.163156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.163180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.163190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.171531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.171554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.171564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.179719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.179741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.179751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.187852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.187876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.187886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.196211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.196233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.196244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.204729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.204752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.204763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.212413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.212436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.212446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.220151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.220174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.220185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.227844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.227866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.227880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.236169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.236193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.236203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.244062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.244085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.244096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.252082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.252105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.252116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.259778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.259801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.259812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.387 [2024-07-25 14:03:43.267920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.387 [2024-07-25 14:03:43.267944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.387 [2024-07-25 14:03:43.267971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.646 [2024-07-25 14:03:43.275701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.646 [2024-07-25 14:03:43.275730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.646 [2024-07-25 14:03:43.275741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.646 [2024-07-25 14:03:43.282923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.282945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.282955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.290239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.290262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:46.647 [2024-07-25 14:03:43.290272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.298001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.298024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.298035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.305920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.305943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.305954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.313627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.313651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.313662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.321576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.321599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.321610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.329383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.329407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.329417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.337185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.337208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.337219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.345176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.345200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.345211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.352672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.352696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.352707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.361212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.361236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.361250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.368927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.368950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.368960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.375987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.376010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.376021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.382736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.382759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.382769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.389427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.389450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.389461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.395983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.396006] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.396016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.402579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.402602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.402612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.409031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.409054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.409065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.415722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.415745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.415756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.422271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.422299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.422310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.428882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.428905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.428915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.435485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.435508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.435518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.442066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.442087] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.442098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.448599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.448622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.448632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.455015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.455037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.455047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.458482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.647 [2024-07-25 14:03:43.458504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.647 [2024-07-25 14:03:43.458514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.647 [2024-07-25 14:03:43.465188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.465211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.465221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.471820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.471842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.471853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.478362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.478384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.478394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.484999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 
00:35:46.648 [2024-07-25 14:03:43.485021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.485032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.491605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.491628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.491639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.498170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.498192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.498202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.504736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.504758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.504768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.511418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.511440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.511450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.517925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.517947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.517957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.524555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.524577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.524587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.648 [2024-07-25 14:03:43.531185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.648 [2024-07-25 14:03:43.531208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.648 [2024-07-25 14:03:43.531222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.537843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.537865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.537877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.544432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.544455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.544466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.551099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.551121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.551131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.557271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.557293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.557304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.563769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.563791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.563801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.570305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.570328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.570338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.576850] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.576872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.576883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.583435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.583457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.583467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.589948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.589974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.589984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.596541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.596563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.596573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.603081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.603102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.603112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.609635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.609658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.609668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:46.908 [2024-07-25 14:03:43.616222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:46.908 [2024-07-25 14:03:43.616245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.908 [2024-07-25 14:03:43.616256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:35:46.908 [2024-07-25 14:03:43.622831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970)
00:35:46.908 [2024-07-25 14:03:43.622853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:46.908 [2024-07-25 14:03:43.622863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line cycle repeats continuously from 14:03:43.629 through 14:03:44.681, always on tqpair=(0x156c970): a data digest error from nvme_tcp.c:1459, the affected READ on sqid:1 (len:32; lba, cid, and sqhd vary per cycle), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
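What this flood records is one digest-failure/retry cycle per three lines: the accel-sequence receive path detects a bad data digest on the incoming PDU, the driver prints the affected READ, and the command completes with a retryable transient transport error. As a rough illustration of the check being reported at nvme_tcp.c:1459, the sketch below recomputes an NVMe/TCP-style CRC32C data digest (DDGST) over a payload and compares it against the received value. This is a minimal stand-in, not SPDK code: crc32c() and data_digest_ok() are invented names, the payload is made up, and SPDK actually computes the digest through its accel framework.

/*
 * Minimal sketch of an NVMe/TCP data digest check. NVMe/TCP protects
 * PDU payloads with a CRC32C ("Castagnoli") digest; the receiver
 * recomputes it and treats a mismatch as a data digest error.
 * Illustrative only -- not SPDK's implementation or API.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reflected CRC32C polynomial (0x1EDC6F41 bit-reversed). */
#define CRC32C_POLY_REFLECTED 0x82F63B78u

/* Bitwise CRC32C: slow but easy to verify; production code uses lookup
 * tables or hardware instructions instead. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ ((crc & 1u) ? CRC32C_POLY_REFLECTED : 0u);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Compare the recomputed digest with the DDGST carried in the PDU. */
static bool data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
{
	return crc32c(payload, len) == ddgst;
}

int main(void)
{
	uint8_t payload[] = {0xDE, 0xAD, 0xBE, 0xEF};
	uint32_t ddgst = crc32c(payload, sizeof(payload));

	printf("intact payload ok: %d\n",
	       data_digest_ok(payload, sizeof(payload), ddgst));

	payload[0] ^= 0x01; /* emulate a bit flip on the wire */
	printf("corrupted payload ok: %d\n",
	       data_digest_ok(payload, sizeof(payload), ddgst));
	return 0;
}

On a mismatch the driver emits exactly the *ERROR*: data digest error line seen above and completes the command with COMMAND TRANSIENT TRANSPORT ERROR (00/22); dnr:0 means the do-not-retry bit is clear, so the initiator is free to resubmit, which is why the cycle repeats here.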
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.627474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.627496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.627507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.639928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.639950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.639960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.653055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.653078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.653088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.663679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.663701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.663712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.673522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.673545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.673555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.681115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.681137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.681147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.694465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.694487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.694497] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:47.950 [2024-07-25 14:03:44.704929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156c970) 00:35:47.950 [2024-07-25 14:03:44.704952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.950 [2024-07-25 14:03:44.704962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:47.950 00:35:47.950 Latency(us) 00:35:47.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.950 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:47.950 nvme0n1 : 2.00 3966.73 495.84 0.00 0.00 4030.99 904.40 14260.63 00:35:47.950 =================================================================================================================== 00:35:47.950 Total : 3966.73 495.84 0.00 0.00 4030.99 904.40 14260.63 00:35:47.950 0 00:35:47.950 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:47.950 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:47.950 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:47.950 | .driver_specific 00:35:47.950 | .nvme_error 00:35:47.950 | .status_code 00:35:47.950 | .command_transient_transport_error' 00:35:47.950 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 255 > 0 )) 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 503906 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 503906 ']' 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 503906 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 503906 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 503906' 00:35:48.209 killing process with pid 503906 00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 503906 00:35:48.209 Received shutdown signal, test time was about 2.000000 seconds 00:35:48.209 00:35:48.209 Latency(us) 00:35:48.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.209 
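For reference, the counting step traced above reduces to the following minimal sketch (bash; the RPC name, socket path, and jq filter are taken verbatim from the trace, while the function wrapper itself is a simplified stand-in for the get_transient_errcount helper in host/digest.sh, and it assumes the controller was attached with --nvme-error-stat enabled):

  #!/usr/bin/env bash
  # Query per-bdev I/O statistics over the bperf RPC socket and extract the
  # NVMe "command transient transport error" counter from the error stats.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
  get_transient_errcount() {
      local bdev=$1
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }
  # The test then asserts the counter is non-zero, which is the
  # "(( 255 > 0 ))" check that follows in the trace (255 errors were counted):
  (( $(get_transient_errcount nvme0n1) > 0 ))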
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 255 > 0 ))
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 503906
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 503906 ']'
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 503906
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 503906
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 503906'
00:35:48.209 killing process with pid 503906
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 503906
00:35:48.209 Received shutdown signal, test time was about 2.000000 seconds
00:35:48.209
00:35:48.209 Latency(us)
00:35:48.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:48.209 ===================================================================================================================
00:35:48.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:48.209 14:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 503906
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=504455
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 504455 /var/tmp/bperf.sock
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 504455 ']'
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:48.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
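The relaunch step above — start bdevperf idle with -z on a private RPC socket, then wait until that socket answers — can be sketched as follows (bash; the bdevperf flags are verbatim from the trace, while the poll loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh):

  #!/usr/bin/env bash
  # Start bdevperf on core mask 0x2, queue depth 128, 4 KiB random writes,
  # paused (-z) until perform_tests is sent over the RPC socket.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/bdevperf" \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Poll the RPC socket until it responds (the helper uses max_retries=100).
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods \
          >/dev/null 2>&1 && break
      kill -0 "$bperfpid" || exit 1   # give up if bdevperf already exited
      sleep 0.1
  done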
00:35:48.468 [2024-07-25 14:03:45.166546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:35:48.468 [2024-07-25 14:03:45.166601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504455 ]
00:35:48.468 EAL: No free 2048 kB hugepages reported on node 1
00:35:48.468 [2024-07-25 14:03:45.201913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:35:48.468 [2024-07-25 14:03:45.237705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:48.468 [2024-07-25 14:03:45.276362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:48.468 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:48.727 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:48.727 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:48.727 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:48.727 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:48.727 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:48.727 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:49.295 nvme0n1
00:35:49.295 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:49.295 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:49.295 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:49.295 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:49.295 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:49.295 14:03:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:49.295 Running I/O for 2 seconds...
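Before the I/O starts, the trace above configures the whole error path over RPC. Condensed into plain commands, it looks like the sketch below (bash; every RPC method and flag appears verbatim in the trace, while the rpc wrapper function is invented glue standing in for the bperf_rpc/rpc_cmd helpers):

  #!/usr/bin/env bash
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  # Count NVMe error completions per status code and retry transient errors
  # indefinitely at the bdev layer.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no stale crc32c error injection is armed before attaching.
  rpc accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF/TCP controller with the data digest (--ddgst) enabled,
  # so every payload carries a crc32c computed through the accel layer.
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the injector: corrupt the next 256 crc32c operations, which surfaces
  # as the "Data digest error" / TRANSIENT TRANSPORT ERROR pairs logged below.
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Release the queued bdevperf job (it was started with -z).
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests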
00:35:49.295 [2024-07-25 14:03:46.051707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720
00:35:49.295 [2024-07-25 14:03:46.051955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.295 [2024-07-25 14:03:46.051984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:49.295 [2024-07-25 14:03:46.060883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720
00:35:49.295 [2024-07-25 14:03:46.061105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:49.295 [2024-07-25 14:03:46.061129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... the same three-line pattern — tcp.c:2113 Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:007d — repeats for each corrupted crc32c operation from 14:03:46.069982 through 14:03:46.985712, with cid cycling through 125, 126, 2, 1, 0 and varying lba values ...]
00:35:50.344 [2024-07-25 14:03:46.994634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720
00:35:50.344 [2024-07-25 14:03:46.994843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:50.344 [2024-07-25 14:03:46.994862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:50.344 [2024-07-25 14:03:47.003806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with
pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.004016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.004036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.012952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.013161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.013180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.022115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.022325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.022345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.031221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.031431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.031451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.040354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.040564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.040585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.049489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.049699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.049724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.058637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.058834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.058853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.067767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.067972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.067992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.076906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.077118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.077137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.086288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.086496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.086515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.095447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.095657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.095678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.104601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.104815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.104846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.113775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.113988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.114009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.122917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.123129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.123149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.132068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.132277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.132298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.141213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.141424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.141444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.150377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.150589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.344 [2024-07-25 14:03:47.150610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.344 [2024-07-25 14:03:47.159532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.344 [2024-07-25 14:03:47.159741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.159760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.168677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.168896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.168917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.177817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.178026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.178046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.186963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.187178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.187199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.196089] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.196298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.196318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.205243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.205453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.205474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.214490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.214703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.214728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.345 [2024-07-25 14:03:47.223642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.345 [2024-07-25 14:03:47.223850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.345 [2024-07-25 14:03:47.223869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.233019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.233231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.233251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.242270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.242482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.242501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.251423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.251641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.251660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.260562] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.260785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.269710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.269936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.269957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.278851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.279058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.279079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.287999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.288209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.288229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.297129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.297339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.297359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.306301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.306512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.306531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.315431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.315640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.315659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 
14:03:47.324596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.324808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.324828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.333707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.333923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.333942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.343070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.343280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.343302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.352277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.352486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.352505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.361388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.361600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.361621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.370578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.370788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.370807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.379699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.379918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.379938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:35:50.607 [2024-07-25 14:03:47.388864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.389076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.389095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.398024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.398240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.398261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.407151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.407372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.407392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.416302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.416495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.416513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.425434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.425653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.425673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.434572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.434776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.434796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.443727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.443937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.443956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.452880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.453093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.453113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.462020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.462236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.462257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.471146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.471363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.471384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.480277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.480488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.480508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.607 [2024-07-25 14:03:47.489435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.607 [2024-07-25 14:03:47.489646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.607 [2024-07-25 14:03:47.489666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.498826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.499035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.499055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.508113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.508325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.508345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.517313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.517524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.517542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.526441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.526672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.535578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.535788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.535808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.544698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.544919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.544938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.553815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.554025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.554044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.562908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.563117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.572009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.572217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.572237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.581007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.581222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.581244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.590092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.590298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.590317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.599432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.599645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.599674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.608547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.608758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.608777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.617633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.617849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.617868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.626732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.626939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.626967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.635805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.636015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.636044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.644915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.645123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.645143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.653994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.654208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.654228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.663062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.663273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.663292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.672130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.672337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.672358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.681191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.681396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.681416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.690270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.690476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.690505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.699342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.699550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.699569] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.708417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.708627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.708647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.717477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.717688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.717708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.726516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.726728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.726747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.735625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.735841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.735860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:50.866 [2024-07-25 14:03:47.744695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:50.866 [2024-07-25 14:03:47.744910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.866 [2024-07-25 14:03:47.744930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.753925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.754136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.754165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.763148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.763356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.763383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.772221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.772429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.772449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.781293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.781515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.781535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.790369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.790585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.790606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.799448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.799661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.799681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.808507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.808728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.808755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.817591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.817800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.817819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.826654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.826871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.826891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.835744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.835953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.835973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.844849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.845061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.845080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.854181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.127 [2024-07-25 14:03:47.854390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.127 [2024-07-25 14:03:47.854409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.127 [2024-07-25 14:03:47.863337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.863545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.863564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.872431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.872641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.872661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.881509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.881723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.881743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.890594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.890805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 
[2024-07-25 14:03:47.890824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.899731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.899939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.899962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.908808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.909017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.909047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.917894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.918103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.918123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.926962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.927170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.927191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.936028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.936235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.936264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.945097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.945306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.945334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.954165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.954374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:51.128 [2024-07-25 14:03:47.954394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.963228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.963438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.963459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.972286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.972494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.972515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.981342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.981552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.981571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.990416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.990623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.990643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:47.999477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:47.999684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:47.999703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.128 [2024-07-25 14:03:48.008519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.128 [2024-07-25 14:03:48.008745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.128 [2024-07-25 14:03:48.008766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.388 [2024-07-25 14:03:48.017933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720 00:35:51.388 [2024-07-25 14:03:48.018143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6529 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:35:51.388 [2024-07-25 14:03:48.018171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:51.388 [2024-07-25 14:03:48.027069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720
00:35:51.388 [2024-07-25 14:03:48.027277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:51.388 [2024-07-25 14:03:48.027297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:51.388 [2024-07-25 14:03:48.036123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104dd10) with pdu=0x2000190fe720
00:35:51.388 [2024-07-25 14:03:48.036333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:51.388 [2024-07-25 14:03:48.036360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:51.388
00:35:51.388 Latency(us)
00:35:51.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:51.388 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:51.388 nvme0n1 : 2.00 27805.03 108.61 0.00 0.00 4595.79 4063.23 15309.21
00:35:51.388 ===================================================================================================================
00:35:51.388 Total : 27805.03 108.61 0.00 0.00 4595.79 4063.23 15309.21
00:35:51.388 0
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:51.388 | .driver_specific
00:35:51.388 | .nvme_error
00:35:51.388 | .status_code
00:35:51.388 | .command_transient_transport_error'
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 504455
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 504455 ']'
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 504455
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:51.388 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 504455
00:35:51.648 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:51.648 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:51.648 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 504455'
00:35:51.648 killing process with pid 504455
00:35:51.648 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 504455
00:35:51.648 Received shutdown signal, test time was about 2.000000 seconds
00:35:51.648
00:35:51.648 Latency(us)
00:35:51.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:51.649 ===================================================================================================================
00:35:51.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 504455
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=504989
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 504989 /var/tmp/bperf.sock
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 504989 ']'
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:51.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:51.649 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:51.649 [2024-07-25 14:03:48.483229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:35:51.649 [2024-07-25 14:03:48.483286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504989 ]
00:35:51.649 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:51.649 Zero copy mechanism will not be used.
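The get_transient_errcount trace just above is the test's pass/fail probe: it asks bdevperf for iostat over the bperf.sock RPC socket and lets jq dig the command_transient_transport_error counter out of the bdev's driver_specific.nvme_error statistics, 218 for this run. A minimal standalone sketch of the same query, assuming the socket path and bdev name this run used:

    #!/usr/bin/env bash
    # Sketch of digest.sh's get_transient_errcount, with the paths from this run.
    sock=/var/tmp/bperf.sock    # bdevperf's RPC listen socket (-r above)
    bdev=nvme0n1                # bdev created by bdev_nvme_attach_controller
    # The counter only exists because bdev_nvme_set_options was called with
    # --nvme-error-stat before the controller was attached.
    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$sock" \
                bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # digest.sh then simply asserts the count is non-zero: (( 218 > 0 )) above.
    (( count > 0 )) && echo "transient transport errors: $count"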
00:35:51.649 EAL: No free 2048 kB hugepages reported on node 1
00:35:51.649 [2024-07-25 14:03:48.519874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:35:51.909 [2024-07-25 14:03:48.555190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:51.909 [2024-07-25 14:03:48.594036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:51.909 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:51.909 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:51.909 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:51.909 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:52.168 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:52.168 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.168 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:52.168 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.168 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:52.168 14:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:52.428 nvme0n1
00:35:52.428 14:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:52.428 14:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:52.428 14:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:52.428 14:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:52.428 14:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:52.428 14:03:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:52.428 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:52.428 Zero copy mechanism will not be used.
00:35:52.428 Running I/O for 2 seconds...
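This second run (randwrite, 131072-byte I/O, queue depth 16) repeats the recipe of the 4096-byte run above: keep per-command NVMe error statistics and retry indefinitely, attach the controller over TCP with data digest enabled, then arm CRC32C error injection in the accel layer so the digests bdevperf generates no longer match the payload and the target fails each affected PDU. Condensed from the trace, a sketch using the exact binaries, socket, and addresses this run used:

    # bdevperf was started first, in wait-for-RPC mode (-z), exactly as traced:
    #   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Count NVMe errors instead of failing I/O; -1 means retry forever.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure injection is off while the controller attaches.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with --ddgst so every data PDU carries a CRC32C digest.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm 'corrupt' injection for crc32c operations (-t corrupt -i 32, verbatim
    # from the trace), then kick off the timed workload.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests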
00:35:52.428 [2024-07-25 14:03:49.230489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.230734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.230763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.240631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.240723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.240746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.248675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.249020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.249043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.256549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.256926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.256948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.263375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.263720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.263741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.270618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.270976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.270997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.278276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.278629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.278650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.285482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.285909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.285930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.293115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.293460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.293481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.301993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.302357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.428 [2024-07-25 14:03:49.302378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.428 [2024-07-25 14:03:49.309440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.428 [2024-07-25 14:03:49.309793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.429 [2024-07-25 14:03:49.309814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.326280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.326691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.326712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.338259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.338635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.338656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.346695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.347045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.347066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.353986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.354353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.354374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.362342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.362705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.362730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.370511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.370867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.370889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.388190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.388675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.388695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.399566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.399948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.399974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.407850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.408232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.408252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.418568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.419235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.419256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.433396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.433756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.433776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.441817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.442189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.442209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.449892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.450238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.450258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.459588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.459980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.460000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.468671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.469042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.469063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.476179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.476616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.476636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.485632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.485754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 
[2024-07-25 14:03:49.485774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.494491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.494875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.689 [2024-07-25 14:03:49.494896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.689 [2024-07-25 14:03:49.501966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.689 [2024-07-25 14:03:49.502396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.502416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.508998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.509358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.509379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.516171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.516533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.516554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.524300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.524684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.524705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.531598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.531768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.531787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.539708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.540101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.540121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.548998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.549390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.549410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.690 [2024-07-25 14:03:49.562312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.690 [2024-07-25 14:03:49.563145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.690 [2024-07-25 14:03:49.563165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.577517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.577905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.577926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.586567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.586735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.586754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.594883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.595246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.595266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.604460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.604925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.604945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.623063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.623619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.623639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.636740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.637098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.637118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.646179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.646307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.646326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.654818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.655248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.655272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.662776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.663149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.663170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.669531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.669923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.669944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.676290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.676643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.676664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.683729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.684090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.684111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.692003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.692357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.692378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.699878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.700243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.700263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.708075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.708452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.708473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.715750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.960 [2024-07-25 14:03:49.716187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.960 [2024-07-25 14:03:49.716208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.960 [2024-07-25 14:03:49.723912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.724346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.724367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.731555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.731943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.731963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.740175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 
[2024-07-25 14:03:49.740579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.740600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.748266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.748631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.748652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.756807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.757151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.757171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.764632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.765010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.765030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.781625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.782021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.782042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.792743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.793119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.793139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.801080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.801459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.801483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.809559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.809937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.809958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.818090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.818478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.818499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.828395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.828562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.828581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.961 [2024-07-25 14:03:49.837775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:52.961 [2024-07-25 14:03:49.838139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.961 [2024-07-25 14:03:49.838159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.847839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.221 [2024-07-25 14:03:49.848287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.221 [2024-07-25 14:03:49.848308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.856841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.221 [2024-07-25 14:03:49.857288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.221 [2024-07-25 14:03:49.857309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.866052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.221 [2024-07-25 14:03:49.866430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.221 [2024-07-25 14:03:49.866451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.875986] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.221 [2024-07-25 14:03:49.876361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.221 [2024-07-25 14:03:49.876381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.884895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.221 [2024-07-25 14:03:49.885325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.221 [2024-07-25 14:03:49.885345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.894011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.221 [2024-07-25 14:03:49.894392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.221 [2024-07-25 14:03:49.894412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.221 [2024-07-25 14:03:49.910347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.222 [2024-07-25 14:03:49.910796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.222 [2024-07-25 14:03:49.910816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:53.222 [2024-07-25 14:03:49.922553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.222 [2024-07-25 14:03:49.922952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.222 [2024-07-25 14:03:49.922973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:53.222 [2024-07-25 14:03:49.932296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.222 [2024-07-25 14:03:49.932669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.222 [2024-07-25 14:03:49.932689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:53.222 [2024-07-25 14:03:49.941001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:53.222 [2024-07-25 14:03:49.941365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.222 [2024-07-25 14:03:49.941385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:53.222 [2024-07-25 14:03:49.948988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90
00:35:53.222 [2024-07-25 14:03:49.949340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.222 [2024-07-25 14:03:49.949360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[repeated log triplets elided: the same tcp.c:2113 data digest *ERROR* on tqpair=(0x104e050) with pdu=0x2000190fef90, each followed by a WRITE sqid:1 cid:15 nsid:1 len:32 command *NOTICE* and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion *NOTICE*, lba varying and sqhd cycling 0001/0021/0041/0061, from 14:03:49.958070 through 14:03:51.102823]
00:35:54.309 [2024-07-25 14:03:51.108174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90
00:35:54.309 [2024-07-25 14:03:51.108447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.309 [2024-07-25 14:03:51.108468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:54.309 [2024-07-25 14:03:51.114842]
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.115178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.115197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.121597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.121905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.121925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.128369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.128634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.128654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.135182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.135494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.135514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.143260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.143569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.143592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.151497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.151823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.151843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.160325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.160593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.160613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:54.309 [2024-07-25 14:03:51.168598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.168883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.168904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.176830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.177149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.177170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.185449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.185724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.185744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:54.309 [2024-07-25 14:03:51.193635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.309 [2024-07-25 14:03:51.193948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.309 [2024-07-25 14:03:51.193969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.568 [2024-07-25 14:03:51.202170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.568 [2024-07-25 14:03:51.202441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.568 [2024-07-25 14:03:51.202461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:54.568 [2024-07-25 14:03:51.210485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x104e050) with pdu=0x2000190fef90 00:35:54.569 [2024-07-25 14:03:51.210752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.569 [2024-07-25 14:03:51.210773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:54.569 00:35:54.569 Latency(us) 00:35:54.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.569 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:54.569 nvme0n1 : 2.00 3624.37 453.05 0.00 0.00 4407.49 2569.01 18664.65 00:35:54.569 =================================================================================================================== 00:35:54.569 Total : 3624.37 453.05 0.00 0.00 
4407.49 2569.01 18664.65 00:35:54.569 0 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:54.569 | .driver_specific 00:35:54.569 | .nvme_error 00:35:54.569 | .status_code 00:35:54.569 | .command_transient_transport_error' 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 504989 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 504989 ']' 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 504989 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:54.569 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 504989 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 504989' 00:35:54.829 killing process with pid 504989 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 504989 00:35:54.829 Received shutdown signal, test time was about 2.000000 seconds 00:35:54.829 00:35:54.829 Latency(us) 00:35:54.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.829 =================================================================================================================== 00:35:54.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 504989 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 503091 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 503091 ']' 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 503091 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 503091 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 503091' 00:35:54.829 killing process with pid 503091 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 503091 00:35:54.829 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 503091 00:35:55.089 00:35:55.089 real 0m14.707s 00:35:55.089 user 0m27.232s 00:35:55.089 sys 0m4.652s 00:35:55.089 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:55.089 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:55.089 ************************************ 00:35:55.089 END TEST nvmf_digest_error 00:35:55.089 ************************************ 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:55.090 rmmod nvme_tcp 00:35:55.090 rmmod nvme_fabrics 00:35:55.090 rmmod nvme_keyring 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 503091 ']' 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 503091 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 503091 ']' 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 503091 00:35:55.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (503091) - No such process 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 503091 is not found' 00:35:55.090 Process with pid 503091 is not found 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.090 14:03:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:57.630 00:35:57.630 real 0m37.696s 00:35:57.630 user 0m54.698s 00:35:57.630 sys 0m14.623s 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.630 ************************************ 00:35:57.630 END TEST nvmf_digest 00:35:57.630 ************************************ 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.630 ************************************ 00:35:57.630 START TEST nvmf_bdevperf 00:35:57.630 ************************************ 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:57.630 * Looking for test storage... 
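An annotation on how the nvmf_digest_error suite that winds down above actually decides pass/fail: it never parses the NOTICE spam. digest.sh@71 reads a single counter out of bdev_get_iostat over the bperf RPC socket and requires it to be non-zero (234 in this run). A minimal re-creation of that check, with the rpc.py path, socket name and jq filter taken directly from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# get_transient_errcount: completions flagged COMMAND TRANSIENT TRANSPORT
# ERROR, as accumulated by the nvme bdev driver for nvme0n1.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

(( errcount > 0 ))   # digest.sh@71: the test fails if nothing was counted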
00:35:57.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:57.630 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:57.631 14:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:04.206 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:04.206 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:04.206 14:04:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:04.206 Found net devices under 0000:af:00.0: cvl_0_0 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:04.206 Found net devices under 0000:af:00.1: cvl_0_1 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:04.206 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:04.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:36:04.207 00:36:04.207 --- 10.0.0.2 ping statistics --- 00:36:04.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.207 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:04.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:36:04.207 00:36:04.207 --- 10.0.0.1 ping statistics --- 00:36:04.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.207 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=509209 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 509209 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 509209 ']' 
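For orientation, the nvmfappstart/waitforlisten pair traced above reduces to launching nvmf_tgt inside the freshly built cvl_0_0_ns_spdk namespace and polling its RPC socket until it answers. A simplified sketch under the paths and flags shown in the trace (the real waitforlisten also enforces the max_retries=100 cap visible below):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# waitforlisten 509209: block until the target answers on /var/tmp/spdk.sock.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1    # give up if the target already exited
    sleep 0.5
done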
00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.207 14:04:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.207 [2024-07-25 14:04:01.020158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:04.207 [2024-07-25 14:04:01.020208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.207 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.207 [2024-07-25 14:04:01.060264] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:04.466 [2024-07-25 14:04:01.095349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:04.466 [2024-07-25 14:04:01.134866] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.466 [2024-07-25 14:04:01.134909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.466 [2024-07-25 14:04:01.134919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.466 [2024-07-25 14:04:01.134927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.466 [2024-07-25 14:04:01.134935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
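The reactor notices that follow are a direct consequence of the -m 0xE core mask: each set bit enables one core, so 0xE (binary 1110) yields reactors on cores 1, 2 and 3 and leaves core 0 free for the bdevperf initiator started later with -c 0x1. Decoding the mask in shell:

mask=0xE
for ((core = 0; core < 8; core++)); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done
# -> cores 1, 2 and 3, matching the three reactor_run notices below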
00:36:04.466 [2024-07-25 14:04:01.135037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.466 [2024-07-25 14:04:01.135123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.466 [2024-07-25 14:04:01.135125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.034 [2024-07-25 14:04:01.869473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.034 Malloc0 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.034 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.294 [2024-07-25 14:04:01.931029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:05.294 { 00:36:05.294 "params": { 00:36:05.294 "name": "Nvme$subsystem", 00:36:05.294 "trtype": "$TEST_TRANSPORT", 00:36:05.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:05.294 "adrfam": "ipv4", 00:36:05.294 "trsvcid": "$NVMF_PORT", 00:36:05.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:05.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:05.294 "hdgst": ${hdgst:-false}, 00:36:05.294 "ddgst": ${ddgst:-false} 00:36:05.294 }, 00:36:05.294 "method": "bdev_nvme_attach_controller" 00:36:05.294 } 00:36:05.294 EOF 00:36:05.294 )") 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:05.294 14:04:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:05.294 "params": { 00:36:05.294 "name": "Nvme1", 00:36:05.294 "trtype": "tcp", 00:36:05.294 "traddr": "10.0.0.2", 00:36:05.294 "adrfam": "ipv4", 00:36:05.294 "trsvcid": "4420", 00:36:05.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:05.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:05.294 "hdgst": false, 00:36:05.294 "ddgst": false 00:36:05.294 }, 00:36:05.294 "method": "bdev_nvme_attach_controller" 00:36:05.294 }' 00:36:05.294 [2024-07-25 14:04:01.982202] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:05.294 [2024-07-25 14:04:01.982257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509359 ] 00:36:05.294 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.294 [2024-07-25 14:04:02.018428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:05.294 [2024-07-25 14:04:02.054663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.294 [2024-07-25 14:04:02.093050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.553 Running I/O for 1 seconds... 
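rpc_cmd in the trace is effectively autotest shorthand for scripts/rpc.py against the target's default socket, so the entire target set-up for this test is the handful of calls traced above plus the generated --json config fed to bdevperf. Written out as plain commands, arguments exactly as traced:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u 8192 sets the I/O unit size
"$rpc" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then connects as the host side: -q 128 keeps 128 commands in flight, -o 4096 uses 4 KiB I/Os, -w verify reads back and checks what it wrote, and -t 1 bounds this first run to one second.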
00:36:06.489 00:36:06.489 Latency(us) 00:36:06.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.489 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:06.489 Verification LBA range: start 0x0 length 0x4000 00:36:06.489 Nvme1n1 : 1.00 12220.95 47.74 0.00 0.00 10425.74 1566.31 17406.36 00:36:06.489 =================================================================================================================== 00:36:06.489 Total : 12220.95 47.74 0.00 0.00 10425.74 1566.31 17406.36 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=509567 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:06.747 { 00:36:06.747 "params": { 00:36:06.747 "name": "Nvme$subsystem", 00:36:06.747 "trtype": "$TEST_TRANSPORT", 00:36:06.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:06.747 "adrfam": "ipv4", 00:36:06.747 "trsvcid": "$NVMF_PORT", 00:36:06.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:06.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:06.747 "hdgst": ${hdgst:-false}, 00:36:06.747 "ddgst": ${ddgst:-false} 00:36:06.747 }, 00:36:06.747 "method": "bdev_nvme_attach_controller" 00:36:06.747 } 00:36:06.747 EOF 00:36:06.747 )") 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:06.747 14:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:06.747 "params": { 00:36:06.747 "name": "Nvme1", 00:36:06.747 "trtype": "tcp", 00:36:06.747 "traddr": "10.0.0.2", 00:36:06.747 "adrfam": "ipv4", 00:36:06.747 "trsvcid": "4420", 00:36:06.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:06.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:06.747 "hdgst": false, 00:36:06.747 "ddgst": false 00:36:06.747 }, 00:36:06.747 "method": "bdev_nvme_attach_controller" 00:36:06.747 }' 00:36:06.747 [2024-07-25 14:04:03.466071] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:06.747 [2024-07-25 14:04:03.466126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509567 ] 00:36:06.747 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.747 [2024-07-25 14:04:03.504115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
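Before the 15-second failover pass begins, a quick consistency check on the 1-second baseline above: the MiB/s column is just IOPS times the 4096-byte I/O size configured with -o.

awk 'BEGIN { printf "%.2f MiB/s\n", 12220.95 * 4096 / (1024 * 1024) }'
# -> 47.74 MiB/s, matching the Nvme1n1 row of the table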
00:36:06.747 [2024-07-25 14:04:03.537766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:06.747 [2024-07-25 14:04:03.575277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:07.005 Running I/O for 15 seconds...
00:36:09.540 14:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 509209
00:36:09.540 14:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:36:09.802 [2024-07-25 14:04:06.436869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:09.802 [2024-07-25 14:04:06.436908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:09.802 [... identical print_command/print_completion pairs elided for the rest of the in-flight queue: READs lba 112648-113568 and WRITEs lba 113584-113656 (len:8 each), every one completed with ABORTED - SQ DELETION (00/08), consistent with the full -q 128 queue depth ...]
00:36:09.805 [2024-07-25 14:04:06.439516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c63e0 is same with the state(5) to be set
00:36:09.805 [2024-07-25 14:04:06.439528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:36:09.805 [2024-07-25 14:04:06.439536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:36:09.805 [2024-07-25 14:04:06.439543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113576 len:8 PRP1 0x0 PRP2 0x0
00:36:09.805 [2024-07-25 14:04:06.439553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:09.805 [2024-07-25 14:04:06.439597] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14c63e0 was disconnected and freed. reset controller.
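The abort flood above is the intended effect of the kill -9 at 14:04:06: pid 509209 (evidently the nvmf target, since the 10.0.0.2:4420 listener disappears with it) is killed while bdevperf holds a full queue, the TCP qpair dies, and the initiator completes every outstanding command with status 00/08, Command Aborted due to SQ Deletion, before freeing the qpair and scheduling a controller reset. A sketch of that fault-injection step under stated assumptions: TGT_PID and LOG are placeholders here, whereas the real script uses the pid it recorded when it started the target and the log stream it is already capturing.

TGT_PID=509209       # the process whose death drops the 10.0.0.2:4420 listener
LOG=bdevperf.log     # wherever the bdevperf output above is being captured

kill -9 "$TGT_PID"   # hard kill: no graceful NVMe shutdown, the TCP connection just dies
sleep 3              # matches bdevperf.sh@35; lets the initiator notice and print the aborts

# Each in-flight command shows up as a print_command/print_completion pair
# with status 00/08 ("Command Aborted due to SQ Deletion"); with -q 128 there
# should be on the order of 128 of them.
grep -c 'ABORTED - SQ DELETION' "$LOG"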
00:36:09.805 [2024-07-25 14:04:06.442305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:09.805 [2024-07-25 14:04:06.442356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:09.805 [2024-07-25 14:04:06.442942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.805 [2024-07-25 14:04:06.442961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:09.805 [2024-07-25 14:04:06.442971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:09.805 [2024-07-25 14:04:06.443142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:09.805 [2024-07-25 14:04:06.443313] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:09.806 [2024-07-25 14:04:06.443326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:09.806 [2024-07-25 14:04:06.443337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:09.806 [2024-07-25 14:04:06.446013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:09.806 [... the same reset cycle ("resetting controller" -> connect() failed, errno = 111 -> "controller reinitialization failed" -> "Resetting controller failed.") repeats for attempts at 14:04:06.455, .468, .480, .493, .506, .519, .532, .545, .557, .570, .583, .596, .609 and .621; elided here ...]
00:36:09.807 [2024-07-25 14:04:06.634668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:09.807 [2024-07-25 14:04:06.635117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.807 [2024-07-25 14:04:06.635170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:09.807 [2024-07-25 14:04:06.635203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:09.807 [2024-07-25 14:04:06.635725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:09.807 [2024-07-25 14:04:06.635883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:09.807 [2024-07-25 14:04:06.635894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:09.807 [2024-07-25 14:04:06.635903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:09.807 [2024-07-25 14:04:06.638441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:09.807 [2024-07-25 14:04:06.647537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:09.807 [2024-07-25 14:04:06.648060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-07-25 14:04:06.648113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-07-25 14:04:06.648145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:09.807 [2024-07-25 14:04:06.648644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:09.807 [2024-07-25 14:04:06.648807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:09.807 [2024-07-25 14:04:06.648819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:09.807 [2024-07-25 14:04:06.648828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:09.807 [2024-07-25 14:04:06.651331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:09.807 [2024-07-25 14:04:06.660380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:09.807 [2024-07-25 14:04:06.660903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-07-25 14:04:06.660922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-07-25 14:04:06.660931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:09.807 [2024-07-25 14:04:06.661097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:09.807 [2024-07-25 14:04:06.661266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:09.807 [2024-07-25 14:04:06.661277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:09.807 [2024-07-25 14:04:06.661286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:09.807 [2024-07-25 14:04:06.663845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:09.808 [2024-07-25 14:04:06.673197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:09.808 [2024-07-25 14:04:06.673655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.808 [2024-07-25 14:04:06.673707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:09.808 [2024-07-25 14:04:06.673755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:09.808 [2024-07-25 14:04:06.674182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:09.808 [2024-07-25 14:04:06.674349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:09.808 [2024-07-25 14:04:06.674360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:09.808 [2024-07-25 14:04:06.674369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:09.808 [2024-07-25 14:04:06.676873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:09.808 [2024-07-25 14:04:06.686046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:09.808 [2024-07-25 14:04:06.686496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.808 [2024-07-25 14:04:06.686514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:09.808 [2024-07-25 14:04:06.686524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:09.808 [2024-07-25 14:04:06.686690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:09.808 [2024-07-25 14:04:06.686861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:09.808 [2024-07-25 14:04:06.686873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:09.808 [2024-07-25 14:04:06.686892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.689504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.069 [2024-07-25 14:04:06.698951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.699499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.699518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.699529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.699701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.699878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.699890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.699900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.702578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.069 [2024-07-25 14:04:06.711865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.712387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.712406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.712416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.712586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.712763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.712775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.712785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.715450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.069 [2024-07-25 14:04:06.724875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.725344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.725363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.725372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.725541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.725711] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.725727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.725736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.728554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.069 [2024-07-25 14:04:06.737946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.738484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.738503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.738513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.738694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.738883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.738896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.738905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.741723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.069 [2024-07-25 14:04:06.751021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.751555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.751575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.751588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.751774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.751955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.751967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.751977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.754772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.069 [2024-07-25 14:04:06.764055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.764510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.764529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.764539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.764708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.764884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.764896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.764905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.767571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.069 [2024-07-25 14:04:06.777030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.777505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.777524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.777534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.777705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.069 [2024-07-25 14:04:06.777880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.069 [2024-07-25 14:04:06.777892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.069 [2024-07-25 14:04:06.777901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.069 [2024-07-25 14:04:06.780569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.069 [2024-07-25 14:04:06.790025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.069 [2024-07-25 14:04:06.790552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.069 [2024-07-25 14:04:06.790571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.069 [2024-07-25 14:04:06.790580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.069 [2024-07-25 14:04:06.790756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.790926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.790941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.790950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.793612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.070 [2024-07-25 14:04:06.802913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.803445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.803464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.803474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.803643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.803818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.803830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.803839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.806580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.070 [2024-07-25 14:04:06.815876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.816402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.816421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.816431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.816599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.816776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.816787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.816796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.819461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.070 [2024-07-25 14:04:06.828757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.829284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.829303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.829313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.829483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.829652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.829663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.829673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.832343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.070 [2024-07-25 14:04:06.841637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.842094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.842113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.842124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.842294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.842463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.842474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.842483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.845149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.070 [2024-07-25 14:04:06.854590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.855119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.855137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.855147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.855317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.855487] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.855498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.855507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.858176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.070 [2024-07-25 14:04:06.867603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.868124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.868142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.868152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.868321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.868491] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.868502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.868512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.871223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.070 [2024-07-25 14:04:06.880528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.880896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-25 14:04:06.880915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.070 [2024-07-25 14:04:06.880928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.070 [2024-07-25 14:04:06.881098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.070 [2024-07-25 14:04:06.881268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.070 [2024-07-25 14:04:06.881279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.070 [2024-07-25 14:04:06.881289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.070 [2024-07-25 14:04:06.883963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.070 [2024-07-25 14:04:06.893402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.070 [2024-07-25 14:04:06.893908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-25 14:04:06.893927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.071 [2024-07-25 14:04:06.893937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.071 [2024-07-25 14:04:06.894106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.071 [2024-07-25 14:04:06.894276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.071 [2024-07-25 14:04:06.894287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.071 [2024-07-25 14:04:06.894296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.071 [2024-07-25 14:04:06.896959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.071 [2024-07-25 14:04:06.906415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.071 [2024-07-25 14:04:06.906942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-25 14:04:06.906961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.071 [2024-07-25 14:04:06.906971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.071 [2024-07-25 14:04:06.907141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.071 [2024-07-25 14:04:06.907311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.071 [2024-07-25 14:04:06.907322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.071 [2024-07-25 14:04:06.907332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.071 [2024-07-25 14:04:06.910004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.071 [2024-07-25 14:04:06.919289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.071 [2024-07-25 14:04:06.919753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-25 14:04:06.919772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.071 [2024-07-25 14:04:06.919782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.071 [2024-07-25 14:04:06.919953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.071 [2024-07-25 14:04:06.920123] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.071 [2024-07-25 14:04:06.920134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.071 [2024-07-25 14:04:06.920146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.071 [2024-07-25 14:04:06.922821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.071 [2024-07-25 14:04:06.932272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.071 [2024-07-25 14:04:06.932726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-25 14:04:06.932745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.071 [2024-07-25 14:04:06.932754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.071 [2024-07-25 14:04:06.932925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.071 [2024-07-25 14:04:06.933094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.071 [2024-07-25 14:04:06.933105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.071 [2024-07-25 14:04:06.933115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.071 [2024-07-25 14:04:06.935786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.071 [2024-07-25 14:04:06.945235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.071 [2024-07-25 14:04:06.945760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-25 14:04:06.945778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.071 [2024-07-25 14:04:06.945789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.071 [2024-07-25 14:04:06.945958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.071 [2024-07-25 14:04:06.946128] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.071 [2024-07-25 14:04:06.946139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.071 [2024-07-25 14:04:06.946148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.071 [2024-07-25 14:04:06.948821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.332 [2024-07-25 14:04:06.958126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.332 [2024-07-25 14:04:06.958648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.332 [2024-07-25 14:04:06.958666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.332 [2024-07-25 14:04:06.958676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.332 [2024-07-25 14:04:06.958853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.332 [2024-07-25 14:04:06.959024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.332 [2024-07-25 14:04:06.959035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.332 [2024-07-25 14:04:06.959044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.332 [2024-07-25 14:04:06.961701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.332 [2024-07-25 14:04:06.971153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.332 [2024-07-25 14:04:06.971684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.332 [2024-07-25 14:04:06.971702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.332 [2024-07-25 14:04:06.971712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.332 [2024-07-25 14:04:06.971885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.332 [2024-07-25 14:04:06.972055] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.332 [2024-07-25 14:04:06.972067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.332 [2024-07-25 14:04:06.972076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.332 [2024-07-25 14:04:06.974751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.332 [2024-07-25 14:04:06.984026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.332 [2024-07-25 14:04:06.984536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.332 [2024-07-25 14:04:06.984554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.332 [2024-07-25 14:04:06.984564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.332 [2024-07-25 14:04:06.984742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.332 [2024-07-25 14:04:06.984912] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.332 [2024-07-25 14:04:06.984924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.332 [2024-07-25 14:04:06.984933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.332 [2024-07-25 14:04:06.987592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.333 [2024-07-25 14:04:06.997044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:06.997567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:06.997585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:06.997595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:06.997770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:06.997941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:06.997952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:06.997961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.000625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.333 [2024-07-25 14:04:07.009920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.010443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.010462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.010472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.010646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.010823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.010835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.010844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.013507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.333 [2024-07-25 14:04:07.022782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.023309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.023327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.023337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.023507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.023676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.023688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.023697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.026367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.333 [2024-07-25 14:04:07.035664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.036196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.036215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.036225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.036396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.036565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.036576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.036586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.039257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.333 [2024-07-25 14:04:07.048543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.049057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.049075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.049085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.049254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.049424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.049435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.049448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.052118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.333 [2024-07-25 14:04:07.061447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.061944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.061961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.061971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.062127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.062284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.062294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.062303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.064852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.333 [2024-07-25 14:04:07.074127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.074589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.074640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.074673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.075282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.075900] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.075936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.075967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.078438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.333 [2024-07-25 14:04:07.086765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.087274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.087291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.087300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.087457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.087614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.087624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.087632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.090177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.333 [2024-07-25 14:04:07.099538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.100063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.100123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.100155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.100673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.100859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.333 [2024-07-25 14:04:07.100871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.333 [2024-07-25 14:04:07.100880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.333 [2024-07-25 14:04:07.103401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.333 [2024-07-25 14:04:07.112270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.333 [2024-07-25 14:04:07.112776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.333 [2024-07-25 14:04:07.112794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.333 [2024-07-25 14:04:07.112804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.333 [2024-07-25 14:04:07.112962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.333 [2024-07-25 14:04:07.113118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.334 [2024-07-25 14:04:07.113129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.334 [2024-07-25 14:04:07.113138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.334 [2024-07-25 14:04:07.115680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.334 [2024-07-25 14:04:07.125042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.334 [2024-07-25 14:04:07.125324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.334 [2024-07-25 14:04:07.125341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.334 [2024-07-25 14:04:07.125350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.334 [2024-07-25 14:04:07.125506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.334 [2024-07-25 14:04:07.125663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.334 [2024-07-25 14:04:07.125673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.334 [2024-07-25 14:04:07.125682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.334 [2024-07-25 14:04:07.128226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.334 [2024-07-25 14:04:07.137783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.334 [2024-07-25 14:04:07.138291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.334 [2024-07-25 14:04:07.138342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.334 [2024-07-25 14:04:07.138375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.334 [2024-07-25 14:04:07.138867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.334 [2024-07-25 14:04:07.139037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.334 [2024-07-25 14:04:07.139048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.334 [2024-07-25 14:04:07.139057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.334 [2024-07-25 14:04:07.141565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.334 [2024-07-25 14:04:07.150477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.334 [2024-07-25 14:04:07.150993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.334 [2024-07-25 14:04:07.151044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.334 [2024-07-25 14:04:07.151077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.334 [2024-07-25 14:04:07.151367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.334 [2024-07-25 14:04:07.151525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.334 [2024-07-25 14:04:07.151535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.334 [2024-07-25 14:04:07.151544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.334 [2024-07-25 14:04:07.154089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:10.334 [2024-07-25 14:04:07.163379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.334 [2024-07-25 14:04:07.163822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.334 [2024-07-25 14:04:07.163874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.334 [2024-07-25 14:04:07.163906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.334 [2024-07-25 14:04:07.164495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.334 [2024-07-25 14:04:07.165102] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.334 [2024-07-25 14:04:07.165137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.334 [2024-07-25 14:04:07.165165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.334 [2024-07-25 14:04:07.167838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.334 [2024-07-25 14:04:07.176347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.334 [2024-07-25 14:04:07.176762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.334 [2024-07-25 14:04:07.176781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.334 [2024-07-25 14:04:07.176791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.334 [2024-07-25 14:04:07.176961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.334 [2024-07-25 14:04:07.177131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.334 [2024-07-25 14:04:07.177143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.334 [2024-07-25 14:04:07.177153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.334 [2024-07-25 14:04:07.179827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.334 [2024-07-25 14:04:07.189280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.334 [2024-07-25 14:04:07.189659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.334 [2024-07-25 14:04:07.189711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.334 [2024-07-25 14:04:07.189758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.334 [2024-07-25 14:04:07.190181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.334 [2024-07-25 14:04:07.190351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.334 [2024-07-25 14:04:07.190362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.334 [2024-07-25 14:04:07.190372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.334 [2024-07-25 14:04:07.192937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.334 [2024-07-25 14:04:07.201995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.334 [2024-07-25 14:04:07.202431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.334 [2024-07-25 14:04:07.202450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.334 [2024-07-25 14:04:07.202460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.334 [2024-07-25 14:04:07.202626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.334 [2024-07-25 14:04:07.202819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.334 [2024-07-25 14:04:07.202831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.334 [2024-07-25 14:04:07.202841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.334 [2024-07-25 14:04:07.205509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.334 [2024-07-25 14:04:07.214910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.334 [2024-07-25 14:04:07.215435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.334 [2024-07-25 14:04:07.215453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.334 [2024-07-25 14:04:07.215463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.334 [2024-07-25 14:04:07.215633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.334 [2024-07-25 14:04:07.215809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.334 [2024-07-25 14:04:07.215820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.334 [2024-07-25 14:04:07.215829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.334 [2024-07-25 14:04:07.218433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.227698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.228235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.228286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.228333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.228939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.596 [2024-07-25 14:04:07.229137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.596 [2024-07-25 14:04:07.229149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.596 [2024-07-25 14:04:07.229158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.596 [2024-07-25 14:04:07.231746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.240441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.240906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.240924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.240933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.241089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.596 [2024-07-25 14:04:07.241246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.596 [2024-07-25 14:04:07.241256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.596 [2024-07-25 14:04:07.241265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.596 [2024-07-25 14:04:07.243810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.253199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.253741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.253793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.253825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.254288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.596 [2024-07-25 14:04:07.254445] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.596 [2024-07-25 14:04:07.254456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.596 [2024-07-25 14:04:07.254464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.596 [2024-07-25 14:04:07.258069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.266462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.266970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.267022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.267055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.267397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.596 [2024-07-25 14:04:07.267554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.596 [2024-07-25 14:04:07.267568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.596 [2024-07-25 14:04:07.267576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.596 [2024-07-25 14:04:07.270118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.279189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.279709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.279772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.279804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.280230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.596 [2024-07-25 14:04:07.280387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.596 [2024-07-25 14:04:07.280398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.596 [2024-07-25 14:04:07.280407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.596 [2024-07-25 14:04:07.282949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.291933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.292435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.292487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.292520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.293127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.596 [2024-07-25 14:04:07.293339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.596 [2024-07-25 14:04:07.293351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.596 [2024-07-25 14:04:07.293360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.596 [2024-07-25 14:04:07.295857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.596 [2024-07-25 14:04:07.304712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.596 [2024-07-25 14:04:07.305181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.596 [2024-07-25 14:04:07.305199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.596 [2024-07-25 14:04:07.305208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.596 [2024-07-25 14:04:07.305365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.305523] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.305534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.305542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.308088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.317542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.317887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.317921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.317931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.318097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.318262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.318273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.318282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.320831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.330283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.330788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.330839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.330872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.331292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.331449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.331460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.331469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.333965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.343006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.343475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.343525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.343558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.344168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.344683] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.344694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.344703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.347172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.355789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.356299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.356354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.356388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.357004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.357516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.357527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.357536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.360077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.368571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.369096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.369148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.369180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.369616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.369797] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.369809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.369818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.372334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.381263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.381607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.381658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.381690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.382293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.382805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.382817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.597 [2024-07-25 14:04:07.382826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.597 [2024-07-25 14:04:07.385295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.597 [2024-07-25 14:04:07.393987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.597 [2024-07-25 14:04:07.394490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.597 [2024-07-25 14:04:07.394540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.597 [2024-07-25 14:04:07.394573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.597 [2024-07-25 14:04:07.395178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.597 [2024-07-25 14:04:07.395620] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.597 [2024-07-25 14:04:07.395631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.395642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.398103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.598 [2024-07-25 14:04:07.406662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.598 [2024-07-25 14:04:07.407086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.598 [2024-07-25 14:04:07.407104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.598 [2024-07-25 14:04:07.407114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.598 [2024-07-25 14:04:07.407269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.598 [2024-07-25 14:04:07.407426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.598 [2024-07-25 14:04:07.407436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.407444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.409903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.598 [2024-07-25 14:04:07.419318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.598 [2024-07-25 14:04:07.419840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.598 [2024-07-25 14:04:07.419891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.598 [2024-07-25 14:04:07.419923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.598 [2024-07-25 14:04:07.420512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.598 [2024-07-25 14:04:07.420822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.598 [2024-07-25 14:04:07.420834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.420843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.423358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.598 [2024-07-25 14:04:07.432038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.598 [2024-07-25 14:04:07.432566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.598 [2024-07-25 14:04:07.432616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.598 [2024-07-25 14:04:07.432648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.598 [2024-07-25 14:04:07.433045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.598 [2024-07-25 14:04:07.433203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.598 [2024-07-25 14:04:07.433214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.433222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.435673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.598 [2024-07-25 14:04:07.444825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.598 [2024-07-25 14:04:07.445248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.598 [2024-07-25 14:04:07.445264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.598 [2024-07-25 14:04:07.445273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.598 [2024-07-25 14:04:07.445429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.598 [2024-07-25 14:04:07.445585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.598 [2024-07-25 14:04:07.445595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.445604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.448196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.598 [2024-07-25 14:04:07.457605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.598 [2024-07-25 14:04:07.458114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.598 [2024-07-25 14:04:07.458133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.598 [2024-07-25 14:04:07.458143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.598 [2024-07-25 14:04:07.458308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.598 [2024-07-25 14:04:07.458472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.598 [2024-07-25 14:04:07.458484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.458493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.461174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.598 [2024-07-25 14:04:07.470540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.598 [2024-07-25 14:04:07.471007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.598 [2024-07-25 14:04:07.471062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.598 [2024-07-25 14:04:07.471095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.598 [2024-07-25 14:04:07.471585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.598 [2024-07-25 14:04:07.471756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.598 [2024-07-25 14:04:07.471768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.598 [2024-07-25 14:04:07.471778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.598 [2024-07-25 14:04:07.474371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.483459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.483996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.484048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.484081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.484677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.484925] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.860 [2024-07-25 14:04:07.484936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.860 [2024-07-25 14:04:07.484945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.860 [2024-07-25 14:04:07.487541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.496198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.496734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.496786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.496818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.497163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.497320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.860 [2024-07-25 14:04:07.497331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.860 [2024-07-25 14:04:07.497340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.860 [2024-07-25 14:04:07.499864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.509154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.509677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.509745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.509780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.510130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.510288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.860 [2024-07-25 14:04:07.510299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.860 [2024-07-25 14:04:07.510308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.860 [2024-07-25 14:04:07.512767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.521881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.522395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.522412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.522421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.522578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.522741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.860 [2024-07-25 14:04:07.522751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.860 [2024-07-25 14:04:07.522763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.860 [2024-07-25 14:04:07.525294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.534566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.535022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.535073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.535105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.535693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.536201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.860 [2024-07-25 14:04:07.536212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.860 [2024-07-25 14:04:07.536221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.860 [2024-07-25 14:04:07.538674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.547225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.547746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.547799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.547831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.548297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.548455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.860 [2024-07-25 14:04:07.548466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.860 [2024-07-25 14:04:07.548475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.860 [2024-07-25 14:04:07.551017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.860 [2024-07-25 14:04:07.560008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.860 [2024-07-25 14:04:07.560478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.860 [2024-07-25 14:04:07.560496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.860 [2024-07-25 14:04:07.560505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.860 [2024-07-25 14:04:07.560661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.860 [2024-07-25 14:04:07.560845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.560857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.560866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.563380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.572744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.573263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.861 [2024-07-25 14:04:07.573322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.861 [2024-07-25 14:04:07.573355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.861 [2024-07-25 14:04:07.573959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.861 [2024-07-25 14:04:07.574187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.574198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.574206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.576666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.585505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.585989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.861 [2024-07-25 14:04:07.586007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.861 [2024-07-25 14:04:07.586016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.861 [2024-07-25 14:04:07.586174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.861 [2024-07-25 14:04:07.586331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.586342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.586350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.588804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.598221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.598766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.861 [2024-07-25 14:04:07.598818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.861 [2024-07-25 14:04:07.598849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.861 [2024-07-25 14:04:07.599253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.861 [2024-07-25 14:04:07.599411] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.599422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.599430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.601891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.610881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.611396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.861 [2024-07-25 14:04:07.611412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.861 [2024-07-25 14:04:07.611421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.861 [2024-07-25 14:04:07.611578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.861 [2024-07-25 14:04:07.611744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.611755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.611763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.614301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.623576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.624006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.861 [2024-07-25 14:04:07.624058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.861 [2024-07-25 14:04:07.624090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.861 [2024-07-25 14:04:07.624682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.861 [2024-07-25 14:04:07.625222] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.625233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.625242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.627696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.636241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.636698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.861 [2024-07-25 14:04:07.636762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.861 [2024-07-25 14:04:07.636795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.861 [2024-07-25 14:04:07.637353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.861 [2024-07-25 14:04:07.637510] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.861 [2024-07-25 14:04:07.637520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.861 [2024-07-25 14:04:07.637528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.861 [2024-07-25 14:04:07.640062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.861 [2024-07-25 14:04:07.648900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.861 [2024-07-25 14:04:07.649413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.862 [2024-07-25 14:04:07.649449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.862 [2024-07-25 14:04:07.649482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.862 [2024-07-25 14:04:07.650051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.862 [2024-07-25 14:04:07.650209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.862 [2024-07-25 14:04:07.650220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.862 [2024-07-25 14:04:07.650228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.862 [2024-07-25 14:04:07.652688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.862 [2024-07-25 14:04:07.661589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.862 [2024-07-25 14:04:07.662036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.862 [2024-07-25 14:04:07.662054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.862 [2024-07-25 14:04:07.662063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.862 [2024-07-25 14:04:07.662220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.862 [2024-07-25 14:04:07.662377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.862 [2024-07-25 14:04:07.662388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.862 [2024-07-25 14:04:07.662396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.862 [2024-07-25 14:04:07.664857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.862 [2024-07-25 14:04:07.674269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.862 [2024-07-25 14:04:07.674772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.862 [2024-07-25 14:04:07.674823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.862 [2024-07-25 14:04:07.674856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.862 [2024-07-25 14:04:07.675304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.862 [2024-07-25 14:04:07.675461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.862 [2024-07-25 14:04:07.675472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.862 [2024-07-25 14:04:07.675482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.862 [2024-07-25 14:04:07.678030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.862 [2024-07-25 14:04:07.686994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.862 [2024-07-25 14:04:07.687521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.862 [2024-07-25 14:04:07.687573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.862 [2024-07-25 14:04:07.687605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.862 [2024-07-25 14:04:07.688099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.862 [2024-07-25 14:04:07.688258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.862 [2024-07-25 14:04:07.688268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.862 [2024-07-25 14:04:07.688278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.862 [2024-07-25 14:04:07.690737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.862 [2024-07-25 14:04:07.699720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.862 [2024-07-25 14:04:07.700242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.862 [2024-07-25 14:04:07.700293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.862 [2024-07-25 14:04:07.700332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.862 [2024-07-25 14:04:07.700940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.862 [2024-07-25 14:04:07.701388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.862 [2024-07-25 14:04:07.701399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.862 [2024-07-25 14:04:07.701408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.862 [2024-07-25 14:04:07.703864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.862 [2024-07-25 14:04:07.712403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:10.862 [2024-07-25 14:04:07.712894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.862 [2024-07-25 14:04:07.712913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:10.862 [2024-07-25 14:04:07.712922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:10.862 [2024-07-25 14:04:07.713087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:10.862 [2024-07-25 14:04:07.713253] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:10.862 [2024-07-25 14:04:07.713264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:10.862 [2024-07-25 14:04:07.713272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:10.862 [2024-07-25 14:04:07.715939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:10.862 [2024-07-25 14:04:07.725324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.862 [2024-07-25 14:04:07.725847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.862 [2024-07-25 14:04:07.725865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.862 [2024-07-25 14:04:07.725874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.862 [2024-07-25 14:04:07.726040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.862 [2024-07-25 14:04:07.726205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.863 [2024-07-25 14:04:07.726215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.863 [2024-07-25 14:04:07.726224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.863 [2024-07-25 14:04:07.728813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.863 [2024-07-25 14:04:07.738066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:10.863 [2024-07-25 14:04:07.738585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.863 [2024-07-25 14:04:07.738634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:10.863 [2024-07-25 14:04:07.738667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:10.863 [2024-07-25 14:04:07.739206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:10.863 [2024-07-25 14:04:07.739365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.863 [2024-07-25 14:04:07.739378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.863 [2024-07-25 14:04:07.739387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.863 [2024-07-25 14:04:07.741982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.124 [2024-07-25 14:04:07.750755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.124 [2024-07-25 14:04:07.751276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.124 [2024-07-25 14:04:07.751328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.124 [2024-07-25 14:04:07.751359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.124 [2024-07-25 14:04:07.751967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.124 [2024-07-25 14:04:07.752542] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.124 [2024-07-25 14:04:07.752553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.124 [2024-07-25 14:04:07.752563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.124 [2024-07-25 14:04:07.755159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.124 [2024-07-25 14:04:07.763418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.124 [2024-07-25 14:04:07.763952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.124 [2024-07-25 14:04:07.764005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.124 [2024-07-25 14:04:07.764036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.124 [2024-07-25 14:04:07.764551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.124 [2024-07-25 14:04:07.764709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.124 [2024-07-25 14:04:07.764724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.124 [2024-07-25 14:04:07.764733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.124 [2024-07-25 14:04:07.767185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.124 [2024-07-25 14:04:07.776254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.124 [2024-07-25 14:04:07.776758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.124 [2024-07-25 14:04:07.776810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.124 [2024-07-25 14:04:07.776842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.124 [2024-07-25 14:04:07.777430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.124 [2024-07-25 14:04:07.777688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.124 [2024-07-25 14:04:07.777699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.124 [2024-07-25 14:04:07.777709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.124 [2024-07-25 14:04:07.780252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.124 [2024-07-25 14:04:07.788930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.124 [2024-07-25 14:04:07.789440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.124 [2024-07-25 14:04:07.789457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.124 [2024-07-25 14:04:07.789466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.124 [2024-07-25 14:04:07.789622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.124 [2024-07-25 14:04:07.789805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.124 [2024-07-25 14:04:07.789816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.124 [2024-07-25 14:04:07.789825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.124 [2024-07-25 14:04:07.792343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.124 [2024-07-25 14:04:07.801696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.124 [2024-07-25 14:04:07.802203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.124 [2024-07-25 14:04:07.802220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.124 [2024-07-25 14:04:07.802229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.124 [2024-07-25 14:04:07.802385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.124 [2024-07-25 14:04:07.802541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.124 [2024-07-25 14:04:07.802550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.124 [2024-07-25 14:04:07.802559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.124 [2024-07-25 14:04:07.805109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.124 [2024-07-25 14:04:07.814378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.124 [2024-07-25 14:04:07.814888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.124 [2024-07-25 14:04:07.814905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.124 [2024-07-25 14:04:07.814913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.815069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.815226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.815236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.815244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.817702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.125 [2024-07-25 14:04:07.827112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.827620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.827637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.827646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.827811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.827969] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.827979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.827987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.830526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.125 [2024-07-25 14:04:07.839940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.840462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.840513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.840545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.840893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.841133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.841148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.841161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.844895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.125 [2024-07-25 14:04:07.853037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.853545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.853562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.853571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.853734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.853915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.853925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.853934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.856448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.125 [2024-07-25 14:04:07.865801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.866307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.866358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.866389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.866822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.866980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.866991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.867006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.869464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.125 [2024-07-25 14:04:07.878591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.879099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.879151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.879183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.879639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.879822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.879834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.879845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.882361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.125 [2024-07-25 14:04:07.891287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.891806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.891857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.891889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.892480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.892731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.892742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.892751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.895447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.125 [2024-07-25 14:04:07.904000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.904510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.904528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.904537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.904692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.904877] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.904888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.904896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.907415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.125 [2024-07-25 14:04:07.916686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.917143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.917195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.917228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.917760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.917918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.917929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.917938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.920393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.125 [2024-07-25 14:04:07.929373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.929806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.125 [2024-07-25 14:04:07.929858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.125 [2024-07-25 14:04:07.929891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.125 [2024-07-25 14:04:07.930479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.125 [2024-07-25 14:04:07.930999] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.125 [2024-07-25 14:04:07.931011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.125 [2024-07-25 14:04:07.931019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.125 [2024-07-25 14:04:07.933473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.125 [2024-07-25 14:04:07.942020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.125 [2024-07-25 14:04:07.942530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.126 [2024-07-25 14:04:07.942547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.126 [2024-07-25 14:04:07.942556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.126 [2024-07-25 14:04:07.942711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.126 [2024-07-25 14:04:07.942897] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.126 [2024-07-25 14:04:07.942907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.126 [2024-07-25 14:04:07.942916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.126 [2024-07-25 14:04:07.945428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.126 [2024-07-25 14:04:07.954697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.126 [2024-07-25 14:04:07.955209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.126 [2024-07-25 14:04:07.955227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.126 [2024-07-25 14:04:07.955235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.126 [2024-07-25 14:04:07.955391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.126 [2024-07-25 14:04:07.955550] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.126 [2024-07-25 14:04:07.955560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.126 [2024-07-25 14:04:07.955569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.126 [2024-07-25 14:04:07.958106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.126 [2024-07-25 14:04:07.967379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.126 [2024-07-25 14:04:07.967878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.126 [2024-07-25 14:04:07.967896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.126 [2024-07-25 14:04:07.967906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.126 [2024-07-25 14:04:07.968071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.126 [2024-07-25 14:04:07.968236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.126 [2024-07-25 14:04:07.968247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.126 [2024-07-25 14:04:07.968255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.126 [2024-07-25 14:04:07.970924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.126 [2024-07-25 14:04:07.980300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.126 [2024-07-25 14:04:07.980823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.126 [2024-07-25 14:04:07.980875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.126 [2024-07-25 14:04:07.980908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.126 [2024-07-25 14:04:07.981364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.126 [2024-07-25 14:04:07.981530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.126 [2024-07-25 14:04:07.981541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.126 [2024-07-25 14:04:07.981550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.126 [2024-07-25 14:04:07.984197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.126 [2024-07-25 14:04:07.993204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.126 [2024-07-25 14:04:07.993740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.126 [2024-07-25 14:04:07.993790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.126 [2024-07-25 14:04:07.993823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.126 [2024-07-25 14:04:07.994181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.126 [2024-07-25 14:04:07.994346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.126 [2024-07-25 14:04:07.994358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.126 [2024-07-25 14:04:07.994367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.126 [2024-07-25 14:04:07.996934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.126 [2024-07-25 14:04:08.006033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.126 [2024-07-25 14:04:08.006598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.126 [2024-07-25 14:04:08.006649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.126 [2024-07-25 14:04:08.006682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.126 [2024-07-25 14:04:08.007198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.126 [2024-07-25 14:04:08.007365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.126 [2024-07-25 14:04:08.007376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.126 [2024-07-25 14:04:08.007386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.126 [2024-07-25 14:04:08.009985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.387 [2024-07-25 14:04:08.018815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.387 [2024-07-25 14:04:08.019279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.387 [2024-07-25 14:04:08.019330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.387 [2024-07-25 14:04:08.019363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.387 [2024-07-25 14:04:08.019922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.387 [2024-07-25 14:04:08.020081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.387 [2024-07-25 14:04:08.020092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.387 [2024-07-25 14:04:08.020101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.387 [2024-07-25 14:04:08.022559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.387 [2024-07-25 14:04:08.031663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.387 [2024-07-25 14:04:08.032121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.387 [2024-07-25 14:04:08.032141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.387 [2024-07-25 14:04:08.032150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.387 [2024-07-25 14:04:08.032321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.387 [2024-07-25 14:04:08.032490] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.387 [2024-07-25 14:04:08.032502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.387 [2024-07-25 14:04:08.032511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.387 [2024-07-25 14:04:08.035182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.387 [2024-07-25 14:04:08.044615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.387 [2024-07-25 14:04:08.045143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.387 [2024-07-25 14:04:08.045165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.387 [2024-07-25 14:04:08.045176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.387 [2024-07-25 14:04:08.045345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.387 [2024-07-25 14:04:08.045515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.387 [2024-07-25 14:04:08.045527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.387 [2024-07-25 14:04:08.045536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.387 [2024-07-25 14:04:08.048203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.387 [2024-07-25 14:04:08.057479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.387 [2024-07-25 14:04:08.058009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.387 [2024-07-25 14:04:08.058028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.387 [2024-07-25 14:04:08.058037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.387 [2024-07-25 14:04:08.058207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.387 [2024-07-25 14:04:08.058378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.387 [2024-07-25 14:04:08.058389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.387 [2024-07-25 14:04:08.058398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.387 [2024-07-25 14:04:08.061063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.387 [2024-07-25 14:04:08.070352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.387 [2024-07-25 14:04:08.070855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.387 [2024-07-25 14:04:08.070873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.387 [2024-07-25 14:04:08.070883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.387 [2024-07-25 14:04:08.071053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.387 [2024-07-25 14:04:08.071223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.387 [2024-07-25 14:04:08.071235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.387 [2024-07-25 14:04:08.071244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.387 [2024-07-25 14:04:08.073908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.387 [2024-07-25 14:04:08.083363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.387 [2024-07-25 14:04:08.083865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.387 [2024-07-25 14:04:08.083884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.387 [2024-07-25 14:04:08.083895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.387 [2024-07-25 14:04:08.084064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.387 [2024-07-25 14:04:08.084237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.387 [2024-07-25 14:04:08.084249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.387 [2024-07-25 14:04:08.084258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.387 [2024-07-25 14:04:08.086931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.388 [2024-07-25 14:04:08.096370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.096870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.096889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.096899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.097069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.097238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.097249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.097258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.099934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.388 [2024-07-25 14:04:08.109381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.109900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.109919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.109929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.110098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.110269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.110279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.110288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.112979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.388 [2024-07-25 14:04:08.122284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.122809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.122827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.122837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.123006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.123175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.123186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.123196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.125859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.388 [2024-07-25 14:04:08.135149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.135652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.135670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.135679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.135855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.136026] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.136036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.136046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.138711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.388 [2024-07-25 14:04:08.148160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.148687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.148705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.148719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.148889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.149060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.149070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.149080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.151742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.388 [2024-07-25 14:04:08.161169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.161696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.161719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.161729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.161898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.162068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.162078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.162088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.164748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.388 [2024-07-25 14:04:08.174038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.174544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.174563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.174576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.174753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.174924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.174935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.174944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.177619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.388 [2024-07-25 14:04:08.186913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.187366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.187385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.187394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.187565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.187740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.187753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.187762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.190430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.388 [2024-07-25 14:04:08.199861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.200385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.200403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.200413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.200583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.200758] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.200769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.200778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.388 [2024-07-25 14:04:08.203436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.388 [2024-07-25 14:04:08.212735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.388 [2024-07-25 14:04:08.213184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.388 [2024-07-25 14:04:08.213203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.388 [2024-07-25 14:04:08.213213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.388 [2024-07-25 14:04:08.213384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.388 [2024-07-25 14:04:08.213554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.388 [2024-07-25 14:04:08.213568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.388 [2024-07-25 14:04:08.213578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.389 [2024-07-25 14:04:08.216251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.389 [2024-07-25 14:04:08.225685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.389 [2024-07-25 14:04:08.226206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-07-25 14:04:08.226225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.389 [2024-07-25 14:04:08.226235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.389 [2024-07-25 14:04:08.226405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.389 [2024-07-25 14:04:08.226575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.389 [2024-07-25 14:04:08.226587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.389 [2024-07-25 14:04:08.226596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.389 [2024-07-25 14:04:08.229269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.389 [2024-07-25 14:04:08.238566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.389 [2024-07-25 14:04:08.239099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-07-25 14:04:08.239152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.389 [2024-07-25 14:04:08.239184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.389 [2024-07-25 14:04:08.239677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.389 [2024-07-25 14:04:08.239853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.389 [2024-07-25 14:04:08.239866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.389 [2024-07-25 14:04:08.239875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.389 [2024-07-25 14:04:08.242544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.389 [2024-07-25 14:04:08.251514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.389 [2024-07-25 14:04:08.252014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-07-25 14:04:08.252034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.389 [2024-07-25 14:04:08.252043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.389 [2024-07-25 14:04:08.252213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.389 [2024-07-25 14:04:08.252383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.389 [2024-07-25 14:04:08.252395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.389 [2024-07-25 14:04:08.252405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.389 [2024-07-25 14:04:08.255047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.389 [2024-07-25 14:04:08.264313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.389 [2024-07-25 14:04:08.264833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.389 [2024-07-25 14:04:08.264851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.389 [2024-07-25 14:04:08.264861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.389 [2024-07-25 14:04:08.265026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.389 [2024-07-25 14:04:08.265191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.389 [2024-07-25 14:04:08.265202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.389 [2024-07-25 14:04:08.265211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.389 [2024-07-25 14:04:08.267807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.649 [2024-07-25 14:04:08.277187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.649 [2024-07-25 14:04:08.277699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.649 [2024-07-25 14:04:08.277722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.649 [2024-07-25 14:04:08.277733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.649 [2024-07-25 14:04:08.277914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.649 [2024-07-25 14:04:08.278079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.649 [2024-07-25 14:04:08.278091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.649 [2024-07-25 14:04:08.278099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.649 [2024-07-25 14:04:08.280695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.649 [2024-07-25 14:04:08.289957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.649 [2024-07-25 14:04:08.290407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.649 [2024-07-25 14:04:08.290459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.649 [2024-07-25 14:04:08.290491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.649 [2024-07-25 14:04:08.290943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.649 [2024-07-25 14:04:08.291101] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.649 [2024-07-25 14:04:08.291112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.649 [2024-07-25 14:04:08.291121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.649 [2024-07-25 14:04:08.293576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.649 [2024-07-25 14:04:08.302794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.649 [2024-07-25 14:04:08.303234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.649 [2024-07-25 14:04:08.303284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.649 [2024-07-25 14:04:08.303316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.649 [2024-07-25 14:04:08.303817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.649 [2024-07-25 14:04:08.303975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.649 [2024-07-25 14:04:08.303986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.649 [2024-07-25 14:04:08.303996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.649 [2024-07-25 14:04:08.306459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.649 [2024-07-25 14:04:08.315596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.649 [2024-07-25 14:04:08.315981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.649 [2024-07-25 14:04:08.315999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.649 [2024-07-25 14:04:08.316009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.649 [2024-07-25 14:04:08.316164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.649 [2024-07-25 14:04:08.316321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.649 [2024-07-25 14:04:08.316332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.649 [2024-07-25 14:04:08.316341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.649 [2024-07-25 14:04:08.318884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.649 [2024-07-25 14:04:08.328320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.649 [2024-07-25 14:04:08.328839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.649 [2024-07-25 14:04:08.328890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.649 [2024-07-25 14:04:08.328923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.649 [2024-07-25 14:04:08.329511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.649 [2024-07-25 14:04:08.329744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.329756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.329766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.332267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.650 [2024-07-25 14:04:08.341211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.341721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.341739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.341749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.341919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.342089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.342101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.342113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.344779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.650 [2024-07-25 14:04:08.354225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.354779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.354830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.354863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.355451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.355762] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.355774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.355784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.358449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.650 [2024-07-25 14:04:08.367117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.367628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.367646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.367656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.367831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.368002] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.368013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.368023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.370688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.650 [2024-07-25 14:04:08.380033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.380569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.380620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.380653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.381063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.381231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.381242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.381251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.383928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.650 [2024-07-25 14:04:08.392708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.393247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.393305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.393338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.393811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.393970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.393981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.393989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.396446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.650 [2024-07-25 14:04:08.405383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.405879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.405930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.405962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.406415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.406574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.406584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.406594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.409091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.650 [2024-07-25 14:04:08.418064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.418510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.418527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.418536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.418693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.418878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.418890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.418899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.421410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.650 [2024-07-25 14:04:08.430855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.431379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.431430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.431462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.431954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.432117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.432127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.432136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.434733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.650 [2024-07-25 14:04:08.443732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.444158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.444177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.444187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.444352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.650 [2024-07-25 14:04:08.444517] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.650 [2024-07-25 14:04:08.444527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.650 [2024-07-25 14:04:08.444536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.650 [2024-07-25 14:04:08.447137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.650 [2024-07-25 14:04:08.456592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.650 [2024-07-25 14:04:08.457037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.650 [2024-07-25 14:04:08.457090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.650 [2024-07-25 14:04:08.457122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.650 [2024-07-25 14:04:08.457711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.458302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.458318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.458330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.651 [2024-07-25 14:04:08.462066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.651 [2024-07-25 14:04:08.470007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.651 [2024-07-25 14:04:08.470400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.651 [2024-07-25 14:04:08.470450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.651 [2024-07-25 14:04:08.470483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.651 [2024-07-25 14:04:08.471056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.471223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.471234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.471244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.651 [2024-07-25 14:04:08.473842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.651 [2024-07-25 14:04:08.482772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.651 [2024-07-25 14:04:08.483225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.651 [2024-07-25 14:04:08.483243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.651 [2024-07-25 14:04:08.483253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.651 [2024-07-25 14:04:08.483423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.483593] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.483603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.483612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.651 [2024-07-25 14:04:08.486275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.651 [2024-07-25 14:04:08.495794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.651 [2024-07-25 14:04:08.496309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.651 [2024-07-25 14:04:08.496361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.651 [2024-07-25 14:04:08.496393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.651 [2024-07-25 14:04:08.496996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.497377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.497388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.497398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.651 [2024-07-25 14:04:08.499992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.651 [2024-07-25 14:04:08.508912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.651 [2024-07-25 14:04:08.509367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.651 [2024-07-25 14:04:08.509421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.651 [2024-07-25 14:04:08.509455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.651 [2024-07-25 14:04:08.510060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.510553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.510565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.510573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.651 [2024-07-25 14:04:08.513175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.651 [2024-07-25 14:04:08.521870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.651 [2024-07-25 14:04:08.522374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.651 [2024-07-25 14:04:08.522393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.651 [2024-07-25 14:04:08.522409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.651 [2024-07-25 14:04:08.522579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.522754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.522766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.522776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.651 [2024-07-25 14:04:08.525435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.651 [2024-07-25 14:04:08.534924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.651 [2024-07-25 14:04:08.535452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.651 [2024-07-25 14:04:08.535470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.651 [2024-07-25 14:04:08.535480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.651 [2024-07-25 14:04:08.535651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.651 [2024-07-25 14:04:08.535829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.651 [2024-07-25 14:04:08.535841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.651 [2024-07-25 14:04:08.535850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.913 [2024-07-25 14:04:08.538509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.913 [2024-07-25 14:04:08.547814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.913 [2024-07-25 14:04:08.548354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.913 [2024-07-25 14:04:08.548406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.913 [2024-07-25 14:04:08.548438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.913 [2024-07-25 14:04:08.548971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.913 [2024-07-25 14:04:08.549209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.913 [2024-07-25 14:04:08.549224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.913 [2024-07-25 14:04:08.549238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.913 [2024-07-25 14:04:08.552979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.913 [2024-07-25 14:04:08.560994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.913 [2024-07-25 14:04:08.561358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.913 [2024-07-25 14:04:08.561376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.913 [2024-07-25 14:04:08.561386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.913 [2024-07-25 14:04:08.561552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.913 [2024-07-25 14:04:08.561724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.913 [2024-07-25 14:04:08.561739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.913 [2024-07-25 14:04:08.561748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.913 [2024-07-25 14:04:08.564350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
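[editor's note] The second error in each cycle, "Failed to flush tqpair=... (9): Bad file descriptor", follows directly from the first: once the refused socket has been torn down, the flush in nvme_tcp_qpair_process_completions is operating on a descriptor that no longer exists, and any I/O on it fails with EBADF (errno 9). A tiny sketch of that mechanism, again illustrative C rather than SPDK code:

    /* Illustrative only, not SPDK code. Shows why flushing after the
     * failed connect reports "(9): Bad file descriptor": I/O on a
     * descriptor that has already been closed fails with EBADF. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            int fds[2];

            if (pipe(fds) != 0)
                    return 1;
            close(fds[1]);                  /* descriptor torn down, as after the refused connect */
            if (write(fds[1], "x", 1) < 0)  /* I/O on the dead descriptor */
                    printf("(%d): %s\n", errno, strerror(errno));
            close(fds[0]);
            return 0;
    }

On Linux this prints "(9): Bad file descriptor", the same errno/message pair the log shows for every flush attempt.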
00:36:11.913 [2024-07-25 14:04:08.573789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.913 [2024-07-25 14:04:08.574247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.913 [2024-07-25 14:04:08.574300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.913 [2024-07-25 14:04:08.574333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.913 [2024-07-25 14:04:08.574938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.913 [2024-07-25 14:04:08.575532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.913 [2024-07-25 14:04:08.575573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.913 [2024-07-25 14:04:08.575582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.578131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.914 [2024-07-25 14:04:08.586555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.586913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.586931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.586940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.587097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.587254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.587264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.587273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.589862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.914 [2024-07-25 14:04:08.599424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.599794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.599812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.599822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.599979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.600136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.600147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.600155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.602677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.914 [2024-07-25 14:04:08.612147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.612645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.612696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.612744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.613334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.613808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.613819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.613828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.616398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.914 [2024-07-25 14:04:08.624980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.625497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.625518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.625528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.625686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.625851] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.625861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.625870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.628403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.914 [2024-07-25 14:04:08.637709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.638185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.638202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.638211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.638368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.638525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.638535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.638545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.641092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.914 [2024-07-25 14:04:08.650435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.650849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.650900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.650939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.651528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.651722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.651733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.651759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.654281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.914 [2024-07-25 14:04:08.663147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.663619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.663669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.663701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.664307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.664752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.664763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.664772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.667241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.914 [2024-07-25 14:04:08.675825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.676271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.676289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.676299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.676463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.676629] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.676640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.676649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.679228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.914 [2024-07-25 14:04:08.688575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.689115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.689166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.689199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.689647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.689822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.689837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.689846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.692361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.914 [2024-07-25 14:04:08.701279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.914 [2024-07-25 14:04:08.701815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.914 [2024-07-25 14:04:08.701867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.914 [2024-07-25 14:04:08.701901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.914 [2024-07-25 14:04:08.702489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.914 [2024-07-25 14:04:08.703045] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.914 [2024-07-25 14:04:08.703056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.914 [2024-07-25 14:04:08.703066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.914 [2024-07-25 14:04:08.705572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.915 [2024-07-25 14:04:08.714069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.714585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.714644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.714677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.715283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.715825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.715836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.715845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.718311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.915 [2024-07-25 14:04:08.726725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.727224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.727276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.727308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.727794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.727961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.727972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.727981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.730493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.915 [2024-07-25 14:04:08.739408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.739923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.739940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.739950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.740115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.740281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.740292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.740301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.742973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.915 [2024-07-25 14:04:08.752380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.752877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.752894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.752904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.753070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.753236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.753247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.753256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.755768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.915 [2024-07-25 14:04:08.765130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.765642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.765659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.765668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.765851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.766018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.766029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.766038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.768628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:11.915 [2024-07-25 14:04:08.778030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.778542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.778593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.778625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.779048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.779215] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.779226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.779235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.781831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:11.915 [2024-07-25 14:04:08.790672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:11.915 [2024-07-25 14:04:08.791208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.915 [2024-07-25 14:04:08.791262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:11.915 [2024-07-25 14:04:08.791294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:11.915 [2024-07-25 14:04:08.791664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:11.915 [2024-07-25 14:04:08.791911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:11.915 [2024-07-25 14:04:08.791927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:11.915 [2024-07-25 14:04:08.791940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:11.915 [2024-07-25 14:04:08.795674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.192 [2024-07-25 14:04:08.803808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.192 [2024-07-25 14:04:08.804323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.192 [2024-07-25 14:04:08.804374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.192 [2024-07-25 14:04:08.804406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.192 [2024-07-25 14:04:08.805013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.192 [2024-07-25 14:04:08.805206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.192 [2024-07-25 14:04:08.805218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.192 [2024-07-25 14:04:08.805227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.192 [2024-07-25 14:04:08.807927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.192 [2024-07-25 14:04:08.816750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.192 [2024-07-25 14:04:08.817287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.192 [2024-07-25 14:04:08.817338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.192 [2024-07-25 14:04:08.817370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.192 [2024-07-25 14:04:08.817851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.192 [2024-07-25 14:04:08.818025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.192 [2024-07-25 14:04:08.818036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.192 [2024-07-25 14:04:08.818048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.192 [2024-07-25 14:04:08.820505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.192 [2024-07-25 14:04:08.829522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.192 [2024-07-25 14:04:08.830037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.192 [2024-07-25 14:04:08.830090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.192 [2024-07-25 14:04:08.830122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.192 [2024-07-25 14:04:08.830711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.192 [2024-07-25 14:04:08.831203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.192 [2024-07-25 14:04:08.831215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.192 [2024-07-25 14:04:08.831223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.192 [2024-07-25 14:04:08.833718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.192 [2024-07-25 14:04:08.842204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.192 [2024-07-25 14:04:08.842719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.192 [2024-07-25 14:04:08.842737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.192 [2024-07-25 14:04:08.842746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.192 [2024-07-25 14:04:08.842902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.192 [2024-07-25 14:04:08.843059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.192 [2024-07-25 14:04:08.843069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.192 [2024-07-25 14:04:08.843077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.192 [2024-07-25 14:04:08.845617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.192 [2024-07-25 14:04:08.855039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.192 [2024-07-25 14:04:08.855551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.192 [2024-07-25 14:04:08.855569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.192 [2024-07-25 14:04:08.855578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.192 [2024-07-25 14:04:08.855743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.192 [2024-07-25 14:04:08.855924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.192 [2024-07-25 14:04:08.855935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.192 [2024-07-25 14:04:08.855944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.192 [2024-07-25 14:04:08.858461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.192 [2024-07-25 14:04:08.867738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.192 [2024-07-25 14:04:08.868252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.192 [2024-07-25 14:04:08.868310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.192 [2024-07-25 14:04:08.868343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.192 [2024-07-25 14:04:08.868949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.192 [2024-07-25 14:04:08.869454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.192 [2024-07-25 14:04:08.869465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.192 [2024-07-25 14:04:08.869474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.192 [2024-07-25 14:04:08.871957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.192 [2024-07-25 14:04:08.880453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.192 [2024-07-25 14:04:08.880971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.192 [2024-07-25 14:04:08.881023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.192 [2024-07-25 14:04:08.881056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.192 [2024-07-25 14:04:08.881645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.192 [2024-07-25 14:04:08.882253] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.192 [2024-07-25 14:04:08.882289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.192 [2024-07-25 14:04:08.882320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.192 [2024-07-25 14:04:08.886056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.192 [2024-07-25 14:04:08.894107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.192 [2024-07-25 14:04:08.894617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.192 [2024-07-25 14:04:08.894635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.192 [2024-07-25 14:04:08.894644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.192 [2024-07-25 14:04:08.894827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.192 [2024-07-25 14:04:08.894994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.192 [2024-07-25 14:04:08.895005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.192 [2024-07-25 14:04:08.895013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.192 [2024-07-25 14:04:08.897525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.192 [2024-07-25 14:04:08.906769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.192 [2024-07-25 14:04:08.907305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.907357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.907390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.907996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.908420] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.908431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.908441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.910982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.919456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.919885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.919903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.919912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.920068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.920224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.920235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.920243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.922788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.932147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.932514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.932555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.932588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.933143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.933310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.933320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.933330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.935831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.944807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.945326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.945376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.945409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.946013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.946476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.946487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.946496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.948987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.957536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.958039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.958056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.958065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.958221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.958378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.958389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.958397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.960941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.970293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.970816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.970868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.970901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.971271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.971429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.971440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.971448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.973993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.983066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.983584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.983635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.983667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.984192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.984359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.984370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.984379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.986874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:08.995742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:08.996249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:08.996267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:08.996280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:08.996446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:08.996611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:08.996622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:08.996631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:08.999319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:09.008477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:09.009007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:09.009060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:09.009093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:09.009600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:09.009777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:09.009789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:09.009798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:09.012263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.193 [2024-07-25 14:04:09.021260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.193 [2024-07-25 14:04:09.021700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.193 [2024-07-25 14:04:09.021764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.193 [2024-07-25 14:04:09.021797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.193 [2024-07-25 14:04:09.022387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.193 [2024-07-25 14:04:09.022850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.193 [2024-07-25 14:04:09.022862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.193 [2024-07-25 14:04:09.022871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.193 [2024-07-25 14:04:09.025339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.194 [2024-07-25 14:04:09.033982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.194 [2024-07-25 14:04:09.034345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.194 [2024-07-25 14:04:09.034362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.194 [2024-07-25 14:04:09.034372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.194 [2024-07-25 14:04:09.034529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.194 [2024-07-25 14:04:09.034685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.194 [2024-07-25 14:04:09.034699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.194 [2024-07-25 14:04:09.034708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.194 [2024-07-25 14:04:09.037253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.194 [2024-07-25 14:04:09.046732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.194 [2024-07-25 14:04:09.047147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.194 [2024-07-25 14:04:09.047197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.194 [2024-07-25 14:04:09.047229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.194 [2024-07-25 14:04:09.047650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.194 [2024-07-25 14:04:09.047833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.194 [2024-07-25 14:04:09.047845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.194 [2024-07-25 14:04:09.047854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.194 [2024-07-25 14:04:09.050373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.194 [2024-07-25 14:04:09.059500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.194 [2024-07-25 14:04:09.059993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.194 [2024-07-25 14:04:09.060011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.194 [2024-07-25 14:04:09.060020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.194 [2024-07-25 14:04:09.060177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.194 [2024-07-25 14:04:09.060334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.194 [2024-07-25 14:04:09.060344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.194 [2024-07-25 14:04:09.060353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.194 [2024-07-25 14:04:09.062895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.464 [2024-07-25 14:04:09.072431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.464 [2024-07-25 14:04:09.072966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.464 [2024-07-25 14:04:09.073018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.464 [2024-07-25 14:04:09.073051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.464 [2024-07-25 14:04:09.073485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.464 [2024-07-25 14:04:09.073651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.464 [2024-07-25 14:04:09.073663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.464 [2024-07-25 14:04:09.073672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.464 [2024-07-25 14:04:09.076360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.464 [2024-07-25 14:04:09.085350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.464 [2024-07-25 14:04:09.085731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.464 [2024-07-25 14:04:09.085784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.085817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.086404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.086591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.086602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.086610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.089191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.098107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.098597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.098615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.098624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.098805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.098971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.098982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.098991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.101498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.110870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.111372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.111389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.111398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.111555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.111710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.111728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.111736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.114276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.123546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.124019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.124071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.124104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.124697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.124943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.124958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.124972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.128704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.136751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.137277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.137329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.137362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.137855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.138022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.138033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.138042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.140632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.149435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.149951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.150002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.150035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.150623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.151017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.151028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.151037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.153547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.162176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.162680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.162697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.162707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.162888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.163054] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.163065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.163080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.165584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.174929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.175452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.175503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.175536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.175964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.176131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.176141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.176150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.178741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.187712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.188243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.188295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.188327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.188934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.189280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.189292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.189301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.191801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.200374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.200773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.465 [2024-07-25 14:04:09.200791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.465 [2024-07-25 14:04:09.200801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.465 [2024-07-25 14:04:09.200958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.465 [2024-07-25 14:04:09.201114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.465 [2024-07-25 14:04:09.201125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.465 [2024-07-25 14:04:09.201133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.465 [2024-07-25 14:04:09.203675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.465 [2024-07-25 14:04:09.213100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.465 [2024-07-25 14:04:09.213620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.213671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.213704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.214129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.214296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.214307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.214316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.216819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.225740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.226264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.226315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.226347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.226951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.227192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.227203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.227212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.229712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.238478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.239001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.239019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.239029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.239186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.239343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.239353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.239361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.241906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.251264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.251775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.251793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.251802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.251971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.252136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.252147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.252156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.254828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.264183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.264701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.264724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.264735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.264900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.265065] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.265076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.265085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.267589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.276996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.277510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.277528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.277538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.277702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.277874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.277886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.277895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.280495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.289783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.290214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.290231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.290241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.290396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.290552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.290562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.290574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.293121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.302549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.303072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.303124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.303156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.303762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.304255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.304266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.304275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.306773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.315337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.315858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.315911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.315943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.316478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.316721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.316736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.316749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.466 [2024-07-25 14:04:09.320481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.466 [2024-07-25 14:04:09.328491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.466 [2024-07-25 14:04:09.329002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.466 [2024-07-25 14:04:09.329020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.466 [2024-07-25 14:04:09.329029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.466 [2024-07-25 14:04:09.329186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.466 [2024-07-25 14:04:09.329343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.466 [2024-07-25 14:04:09.329353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.466 [2024-07-25 14:04:09.329361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.467 [2024-07-25 14:04:09.331906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.467 [2024-07-25 14:04:09.341355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.467 [2024-07-25 14:04:09.341891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.467 [2024-07-25 14:04:09.341950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.467 [2024-07-25 14:04:09.341983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.467 [2024-07-25 14:04:09.342483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.467 [2024-07-25 14:04:09.342654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.467 [2024-07-25 14:04:09.342665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.467 [2024-07-25 14:04:09.342674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.467 [2024-07-25 14:04:09.345329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.726 [2024-07-25 14:04:09.354287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.726 [2024-07-25 14:04:09.354813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.726 [2024-07-25 14:04:09.354832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.726 [2024-07-25 14:04:09.354843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.726 [2024-07-25 14:04:09.355015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.726 [2024-07-25 14:04:09.355185] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.726 [2024-07-25 14:04:09.355196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.726 [2024-07-25 14:04:09.355206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.726 [2024-07-25 14:04:09.357879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.726 [2024-07-25 14:04:09.367167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.726 [2024-07-25 14:04:09.367612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.726 [2024-07-25 14:04:09.367631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.726 [2024-07-25 14:04:09.367641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.726 [2024-07-25 14:04:09.367816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.726 [2024-07-25 14:04:09.367986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.726 [2024-07-25 14:04:09.367998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.726 [2024-07-25 14:04:09.368008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.726 [2024-07-25 14:04:09.370674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.726 [2024-07-25 14:04:09.380125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.726 [2024-07-25 14:04:09.380564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.726 [2024-07-25 14:04:09.380582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.726 [2024-07-25 14:04:09.380591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.726 [2024-07-25 14:04:09.380785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.726 [2024-07-25 14:04:09.380960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.726 [2024-07-25 14:04:09.380971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.726 [2024-07-25 14:04:09.380980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.726 [2024-07-25 14:04:09.383647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.726 [2024-07-25 14:04:09.392803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.726 [2024-07-25 14:04:09.393213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.726 [2024-07-25 14:04:09.393232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.726 [2024-07-25 14:04:09.393242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.726 [2024-07-25 14:04:09.393400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.726 [2024-07-25 14:04:09.393556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.726 [2024-07-25 14:04:09.393566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.726 [2024-07-25 14:04:09.393575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.726 [2024-07-25 14:04:09.396123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.726 [2024-07-25 14:04:09.405557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:12.726 [2024-07-25 14:04:09.405999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.726 [2024-07-25 14:04:09.406016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:12.726 [2024-07-25 14:04:09.406025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:12.726 [2024-07-25 14:04:09.406183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:12.726 [2024-07-25 14:04:09.406339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:12.726 [2024-07-25 14:04:09.406350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:12.726 [2024-07-25 14:04:09.406358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:12.726 [2024-07-25 14:04:09.408903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:12.726 [2024-07-25 14:04:09.418318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.726 [2024-07-25 14:04:09.418811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.726 [2024-07-25 14:04:09.418829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.726 [2024-07-25 14:04:09.418838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.726 [2024-07-25 14:04:09.418994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.726 [2024-07-25 14:04:09.419151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.726 [2024-07-25 14:04:09.419161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.726 [2024-07-25 14:04:09.419169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.726 [2024-07-25 14:04:09.421712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.726 [2024-07-25 14:04:09.431115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.726 [2024-07-25 14:04:09.431573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.726 [2024-07-25 14:04:09.431624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.726 [2024-07-25 14:04:09.431656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.726 [2024-07-25 14:04:09.432268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 509209 Killed "${NVMF_APP[@]}" "$@" 00:36:12.726 [2024-07-25 14:04:09.432518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.726 [2024-07-25 14:04:09.432529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.726 [2024-07-25 14:04:09.432540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.726 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:12.726 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:12.726 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:12.726 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:12.726 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.726 [2024-07-25 14:04:09.435152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=510565 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 510565 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 510565 ']' 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.727 [2024-07-25 14:04:09.444111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:12.727 [2024-07-25 14:04:09.444552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.727 [2024-07-25 14:04:09.444571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.727 [2024-07-25 14:04:09.444583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.444761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:12.727 [2024-07-25 14:04:09.444931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.444944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.444953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 14:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.727 [2024-07-25 14:04:09.447622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
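Interleaved with the reconnect noise, the script side of this block restarts the target: tgt_init calls nvmfappstart -m 0xE, records nvmfpid=510565, and then blocks in waitforlisten until the new process answers on /var/tmp/spdk.sock. A rough standalone equivalent of that wait (a hypothetical sketch, not the autotest helper itself; pid and socket path are the ones reported in the log):

# Hypothetical waitforlisten-style loop: poll until the SPDK RPC UNIX
# socket appears, bailing out if the target process dies first.
pid=510565                 # nvmfpid reported in the log
sock=/var/tmp/spdk.sock
until [ -S "$sock" ]; do
    kill -0 "$pid" 2>/dev/null || { echo "target $pid died before listening" >&2; exit 1; }
    sleep 0.1
done
echo "target $pid is listening on $sock"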
00:36:12.727 [2024-07-25 14:04:09.457061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.457585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.457604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 [2024-07-25 14:04:09.457614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.457788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 [2024-07-25 14:04:09.457959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.457970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.457980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 [2024-07-25 14:04:09.460647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.727 [2024-07-25 14:04:09.469940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.470458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.470476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 [2024-07-25 14:04:09.470486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.470655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 [2024-07-25 14:04:09.470832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.470844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.470853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 [2024-07-25 14:04:09.473520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.727 [2024-07-25 14:04:09.482790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.483310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.483328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 [2024-07-25 14:04:09.483337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.483503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 [2024-07-25 14:04:09.483669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.483680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.483690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 [2024-07-25 14:04:09.486348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.727 [2024-07-25 14:04:09.495705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.495789] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:12.727 [2024-07-25 14:04:09.495835] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:12.727 [2024-07-25 14:04:09.496230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.496247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 [2024-07-25 14:04:09.496257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.496427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 [2024-07-25 14:04:09.496595] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.496605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.496614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 [2024-07-25 14:04:09.499286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
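The "Starting SPDK v24.09-pre ... DPDK 24.07.0-rc3 initialization" banner and the EAL parameter dump above belong to the nvmf_tgt relaunch recorded a few records earlier. Rerunning that launch by hand looks like this (flags, binary path, and network namespace copied from the log; it only makes sense on a machine provisioned the same way as this test node):

# Manual re-run of the target launch from the log: instance id 0,
# tracepoint mask 0xFFFF, reactor core mask 0xE, inside the test netns.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE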
00:36:12.727 [2024-07-25 14:04:09.508871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.509381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.509400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 [2024-07-25 14:04:09.509411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.509582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 [2024-07-25 14:04:09.509757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.509769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.509779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 [2024-07-25 14:04:09.512456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.727 [2024-07-25 14:04:09.521755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.522267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.522286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.727 [2024-07-25 14:04:09.522296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.727 [2024-07-25 14:04:09.522468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.727 [2024-07-25 14:04:09.522637] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.727 [2024-07-25 14:04:09.522648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.727 [2024-07-25 14:04:09.522657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.727 [2024-07-25 14:04:09.525441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.727 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.727 [2024-07-25 14:04:09.534651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.727 [2024-07-25 14:04:09.535181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.727 [2024-07-25 14:04:09.535201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.728 [2024-07-25 14:04:09.535215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.728 [2024-07-25 14:04:09.535380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.728 [2024-07-25 14:04:09.535545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.728 [2024-07-25 14:04:09.535557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.728 [2024-07-25 14:04:09.535565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.728 [2024-07-25 14:04:09.537586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:12.728 [2024-07-25 14:04:09.538226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.728 [2024-07-25 14:04:09.547673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.728 [2024-07-25 14:04:09.548226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.728 [2024-07-25 14:04:09.548245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.728 [2024-07-25 14:04:09.548255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.728 [2024-07-25 14:04:09.548425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.728 [2024-07-25 14:04:09.548595] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.728 [2024-07-25 14:04:09.548606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.728 [2024-07-25 14:04:09.548616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.728 [2024-07-25 14:04:09.551287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
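Two environment notices surface in this block: EAL found no free 2048 kB hugepages on NUMA node 1 (the target still starts, so node 0 presumably holds the pool), and SPDK flags the in-development DPDK 24.07.0-rc3 as enabled for validation only. A quick check of the per-node hugepage pools on such a machine (standard sysfs paths, nothing SPDK-specific):

# Per-NUMA-node 2048 kB hugepage pools; node 1 showing 0 free would match
# the EAL notice above.
for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
    echo "$n: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
done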
00:36:12.728 [2024-07-25 14:04:09.560582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.728 [2024-07-25 14:04:09.561113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.728 [2024-07-25 14:04:09.561132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.728 [2024-07-25 14:04:09.561142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.728 [2024-07-25 14:04:09.561313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.728 [2024-07-25 14:04:09.561482] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.728 [2024-07-25 14:04:09.561493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.728 [2024-07-25 14:04:09.561502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.728 [2024-07-25 14:04:09.564143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.728 [2024-07-25 14:04:09.571480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:12.728 [2024-07-25 14:04:09.573476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.728 [2024-07-25 14:04:09.574006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.728 [2024-07-25 14:04:09.574025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.728 [2024-07-25 14:04:09.574035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.728 [2024-07-25 14:04:09.574204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.728 [2024-07-25 14:04:09.574370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.728 [2024-07-25 14:04:09.574381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.728 [2024-07-25 14:04:09.574390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.728 [2024-07-25 14:04:09.577007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.728 [2024-07-25 14:04:09.586386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.728 [2024-07-25 14:04:09.586820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.728 [2024-07-25 14:04:09.586839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.728 [2024-07-25 14:04:09.586849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.728 [2024-07-25 14:04:09.587015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.728 [2024-07-25 14:04:09.587181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.728 [2024-07-25 14:04:09.587194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.728 [2024-07-25 14:04:09.587204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.728 [2024-07-25 14:04:09.589850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.728 [2024-07-25 14:04:09.599329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.728 [2024-07-25 14:04:09.599896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.728 [2024-07-25 14:04:09.599920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.728 [2024-07-25 14:04:09.599931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.728 [2024-07-25 14:04:09.600101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.728 [2024-07-25 14:04:09.600269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.728 [2024-07-25 14:04:09.600281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.728 [2024-07-25 14:04:09.600291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.728 [2024-07-25 14:04:09.602947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.728 [2024-07-25 14:04:09.611391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:12.728 [2024-07-25 14:04:09.611422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:12.728 [2024-07-25 14:04:09.611432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:12.728 [2024-07-25 14:04:09.611441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:12.728 [2024-07-25 14:04:09.611449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
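The app_setup_trace notices above give both capture paths for the 0xFFFF tracepoint mask; spelled out as commands, the spdk_trace invocation and the shared-memory file are verbatim from the notices, while the copy destination is an arbitrary choice:

# Live snapshot of the trace ring for app instance 0, as the notice
# suggests (run while the target is up):
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0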
00:36:12.728 [2024-07-25 14:04:09.611491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:12.728 [2024-07-25 14:04:09.611573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:12.728 [2024-07-25 14:04:09.611575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.728 [2024-07-25 14:04:09.612342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.612893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-07-25 14:04:09.612915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-07-25 14:04:09.612926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.988 [2024-07-25 14:04:09.613098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.988 [2024-07-25 14:04:09.613271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.988 [2024-07-25 14:04:09.613284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.988 [2024-07-25 14:04:09.613294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.988 [2024-07-25 14:04:09.615965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.988 [2024-07-25 14:04:09.625271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.625725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-07-25 14:04:09.625750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-07-25 14:04:09.625762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.988 [2024-07-25 14:04:09.625934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.988 [2024-07-25 14:04:09.626108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.988 [2024-07-25 14:04:09.626120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.988 [2024-07-25 14:04:09.626131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.988 [2024-07-25 14:04:09.628804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
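The three reactor lines at the top of this block match the -m 0xE core mask: 0xE is binary 1110, so bits 1-3 are set, one reactor is pinned to each of cores 1, 2 and 3, and core 0 is left alone, which is also why app.c reported three cores available. A tiny mask expander (hypothetical helper, not part of the test):

# Expand a hex core mask into the core list it selects: 0xE -> 1 2 3.
mask=0xE
for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && printf '%d ' "$core"
done
echo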
00:36:12.988 [2024-07-25 14:04:09.638274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.638791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-07-25 14:04:09.638814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-07-25 14:04:09.638825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.988 [2024-07-25 14:04:09.638998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.988 [2024-07-25 14:04:09.639170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.988 [2024-07-25 14:04:09.639182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.988 [2024-07-25 14:04:09.639193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.988 [2024-07-25 14:04:09.641864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.988 [2024-07-25 14:04:09.651305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.651855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-07-25 14:04:09.651877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-07-25 14:04:09.651888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.988 [2024-07-25 14:04:09.652065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.988 [2024-07-25 14:04:09.652237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.988 [2024-07-25 14:04:09.652249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.988 [2024-07-25 14:04:09.652259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.988 [2024-07-25 14:04:09.654932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.988 [2024-07-25 14:04:09.664234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.664710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-07-25 14:04:09.664737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-07-25 14:04:09.664749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.988 [2024-07-25 14:04:09.664922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.988 [2024-07-25 14:04:09.665094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.988 [2024-07-25 14:04:09.665106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.988 [2024-07-25 14:04:09.665119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.988 [2024-07-25 14:04:09.667793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.988 [2024-07-25 14:04:09.677241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.677747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-07-25 14:04:09.677766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-07-25 14:04:09.677777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.988 [2024-07-25 14:04:09.677948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.988 [2024-07-25 14:04:09.678118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.988 [2024-07-25 14:04:09.678130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.988 [2024-07-25 14:04:09.678139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.988 [2024-07-25 14:04:09.680819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.988 [2024-07-25 14:04:09.690114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.988 [2024-07-25 14:04:09.690642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.690661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.690671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.690848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.691019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.691030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.691044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.693705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.989 [2024-07-25 14:04:09.703005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.703529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.703548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.703557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.703735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.703905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.703917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.703926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.706593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.989 [2024-07-25 14:04:09.715899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.716404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.716423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.716433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.716604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.716781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.716793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.716802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.719467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.989 [2024-07-25 14:04:09.728907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.729364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.729382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.729393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.729565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.729742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.729753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.729762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.732426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.989 [2024-07-25 14:04:09.741883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.742393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.742412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.742422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.742597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.742777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.742789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.742798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.745460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.989 [2024-07-25 14:04:09.754763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.755139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.755158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.755168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.755339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.755509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.755520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.755530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.758206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.989 [2024-07-25 14:04:09.767661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.768173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.768193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.768203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.768373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.768543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.768555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.768564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.771233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.989 [2024-07-25 14:04:09.780681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.781196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.781215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.781224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.781395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.781569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.781581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.781590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.784265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.989 [2024-07-25 14:04:09.793546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.793943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.793962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.793972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.794142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.794312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.794323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.794331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.797001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.989 [2024-07-25 14:04:09.806454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.806978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.806997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.989 [2024-07-25 14:04:09.807008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.989 [2024-07-25 14:04:09.807178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.989 [2024-07-25 14:04:09.807348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.989 [2024-07-25 14:04:09.807359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.989 [2024-07-25 14:04:09.807368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.989 [2024-07-25 14:04:09.810042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.989 [2024-07-25 14:04:09.819474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.989 [2024-07-25 14:04:09.819998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.989 [2024-07-25 14:04:09.820017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.990 [2024-07-25 14:04:09.820027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.990 [2024-07-25 14:04:09.820197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.990 [2024-07-25 14:04:09.820368] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.990 [2024-07-25 14:04:09.820379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.990 [2024-07-25 14:04:09.820388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.990 [2024-07-25 14:04:09.823062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.990 [2024-07-25 14:04:09.832356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.990 [2024-07-25 14:04:09.832809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.990 [2024-07-25 14:04:09.832828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.990 [2024-07-25 14:04:09.832838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.990 [2024-07-25 14:04:09.833008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.990 [2024-07-25 14:04:09.833177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.990 [2024-07-25 14:04:09.833188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.990 [2024-07-25 14:04:09.833198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.990 [2024-07-25 14:04:09.835869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.990 [2024-07-25 14:04:09.845323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.990 [2024-07-25 14:04:09.845826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.990 [2024-07-25 14:04:09.845845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.990 [2024-07-25 14:04:09.845855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.990 [2024-07-25 14:04:09.846025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.990 [2024-07-25 14:04:09.846196] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.990 [2024-07-25 14:04:09.846207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.990 [2024-07-25 14:04:09.846216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.990 [2024-07-25 14:04:09.848891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.990 [2024-07-25 14:04:09.858339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.990 [2024-07-25 14:04:09.858795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.990 [2024-07-25 14:04:09.858814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.990 [2024-07-25 14:04:09.858824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.990 [2024-07-25 14:04:09.858994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.990 [2024-07-25 14:04:09.859164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.990 [2024-07-25 14:04:09.859176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.990 [2024-07-25 14:04:09.859185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.990 [2024-07-25 14:04:09.861857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:12.990 [2024-07-25 14:04:09.871290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.990 [2024-07-25 14:04:09.871724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.990 [2024-07-25 14:04:09.871742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:12.990 [2024-07-25 14:04:09.871757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:12.990 [2024-07-25 14:04:09.871927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:12.990 [2024-07-25 14:04:09.872097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:12.990 [2024-07-25 14:04:09.872108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:12.990 [2024-07-25 14:04:09.872118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:12.990 [2024-07-25 14:04:09.874793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:13.251 [2024-07-25 14:04:09.884278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:13.251 [2024-07-25 14:04:09.884764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.251 [2024-07-25 14:04:09.884785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420 00:36:13.251 [2024-07-25 14:04:09.884795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set 00:36:13.251 [2024-07-25 14:04:09.884966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor 00:36:13.251 [2024-07-25 14:04:09.885136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:13.251 [2024-07-25 14:04:09.885147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:13.251 [2024-07-25 14:04:09.885156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:13.251 [2024-07-25 14:04:09.887826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:13.251 [2024-07-25 14:04:09.897283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:13.251 [2024-07-25 14:04:09.897746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.251 [2024-07-25 14:04:09.897765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12940d0 with addr=10.0.0.2, port=4420
00:36:13.251 [2024-07-25 14:04:09.897775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12940d0 is same with the state(5) to be set
00:36:13.251 [2024-07-25 14:04:09.897945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12940d0 (9): Bad file descriptor
00:36:13.251 [2024-07-25 14:04:09.898115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:13.251 [2024-07-25 14:04:09.898127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:13.251 [2024-07-25 14:04:09.898137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:13.251 [2024-07-25 14:04:09.900807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:13.251 [... the identical reset/connect-refused cycle repeats 31 more times, roughly every 13 ms, from 14:04:09.910241 through 14:04:10.302179; every attempt ends with "Resetting controller failed." because nothing is listening on 10.0.0.2:4420 yet ...]
00:36:13.515 14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:13.515 14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:13.516 [... interleaved with the trace, the failing reset cycle continues at 14:04:10.311616, .324480, .337353 and .350281, each attempt again ending with "Resetting controller failed." ...]
00:36:13.516 14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:13.516 [2024-07-25 14:04:10.361936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:13.516 [... reset attempts at 14:04:10.363147 and 14:04:10.376070 still fail: the transport now exists, but no listener is bound yet ...]
00:36:13.516 14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:13.777 [... further failing attempts at 14:04:10.389044 and 14:04:10.402046 ...]
00:36:13.777 Malloc0
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:13.777 [... one more failing attempt at 14:04:10.415070 ...]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:13.777 [2024-07-25 14:04:10.428076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:13.777 [2024-07-25 14:04:10.428562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:13.777 [... this attempt still races the listener registration and fails at 14:04:10.431635 ...]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:04:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 509567
00:36:13.777 [2024-07-25 14:04:10.441071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:13.777 [2024-07-25 14:04:10.469663] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
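Read in isolation, the rpc_cmd lines above amount to the following target bring-up sequence. This consolidated sketch is assembled from the trace (the rpc.py path and the plain-shell framing are assumptions; every command and argument is verbatim from the log):

# Consolidated from the rpc_cmd calls traced above (host/bdevperf.sh@17-21).
# rpc=<path to SPDK's scripts/rpc.py> is an assumption about invocation.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # "TCP Transport Init"
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Only after this last step ("Target Listening on 10.0.0.2 port 4420") can the
# initiator reconnect, which is why the 14:04:10.441071 reset finally succeeds.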
00:36:23.754
00:36:23.754                                                                                    Latency(us)
00:36:23.754 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:36:23.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:23.754 	 Verification LBA range: start 0x0 length 0x4000
00:36:23.754 	 Nvme1n1                             :      15.01    8815.77      34.44   13300.22      0.00    5768.72     835.58   17930.65
00:36:23.754 ===================================================================================================================
00:36:23.754 Total                                  :               8815.77      34.44   13300.22      0.00    5768.72     835.58   17930.65
00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 510565 ']'
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 510565
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 510565 ']'
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 510565
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 510565
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 510565'
killing process with pid 510565
14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 510565
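A quick cross-check of the bdevperf table above: at the 4096-byte IO size, the reported MiB/s follows directly from the IOPS column (a back-of-the-envelope check, not output produced by the test):

# 8815.77 IO/s * 4096 B / 1048576 B per MiB = 34.44 MiB/s, matching the table.
# The large Fail/s figure is consistent with bdevperf continuing to issue IO
# across the repeated controller resets logged earlier in this run.
echo '8815.77 * 4096 / 1048576' | bc -l    # => 34.4366..., reported as 34.44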
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 510565 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:23.754 14:04:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:24.692 00:36:24.692 real 0m27.316s 00:36:24.692 user 1m1.862s 00:36:24.692 sys 0m8.246s 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.692 ************************************ 00:36:24.692 END TEST nvmf_bdevperf 00:36:24.692 ************************************ 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.692 ************************************ 00:36:24.692 START TEST nvmf_target_disconnect 00:36:24.692 ************************************ 00:36:24.692 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:24.951 * Looking for test storage... 
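For reference, run_test above is essentially a timed, banner-wrapped script invocation, so this suite can be replayed by hand from the same workspace:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/host/target_disconnect.sh --transport=tcp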
00:36:24.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.951 
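The host identity exported above comes from nvme-cli. A sketch of the derivation (the parameter expansion is an assumption about how common.sh strips the NQN prefix, but it reproduces the values in this log), plus how NVME_HOST is later consumed:

  NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}              # keep only the trailing uuid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # illustrative only, no target is up at this point in the log:
  nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn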
14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:24.951 14:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:31.517 
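The e810/x722/mlx device tables being assembled here drive the "Found ..." lines just below. A rough pciutils stand-in for that scan (not the harness's own code path; 8086:159b is the Intel E810 ID matched in this run, and the sysfs net/ listing is where the cvl_* names come from):

  for addr in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $addr (0x8086 - 0x159b)"
      ls "/sys/bus/pci/devices/$addr/net/"     # netdevs behind the port, e.g. cvl_0_0
  done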
14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:31.517 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:31.517 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:31.517 Found net devices under 0000:af:00.0: cvl_0_0 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:31.517 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:31.518 Found net devices under 0000:af:00.1: cvl_0_1 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:31.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:31.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:36:31.518 00:36:31.518 --- 10.0.0.2 ping statistics --- 00:36:31.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.518 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:31.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:31.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:36:31.518 00:36:31.518 --- 10.0.0.1 ping statistics --- 00:36:31.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.518 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.518 14:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:31.518 ************************************ 00:36:31.518 START TEST nvmf_target_disconnect_tc1 00:36:31.518 ************************************ 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:31.518 14:04:28 
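Condensed, the nvmf_tcp_init plumbing traced above splits the dual-port NIC across namespaces: the target port moves into cvl_0_0_ns_spdk as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and the two pings prove reachability in both directions. Interface names are the ones from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target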
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:31.518 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.518 [2024-07-25 14:04:28.142919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.518 [2024-07-25 14:04:28.143036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eca60 with addr=10.0.0.2, port=4420 00:36:31.518 [2024-07-25 14:04:28.143106] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:31.518 [2024-07-25 14:04:28.143158] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:31.518 [2024-07-25 14:04:28.143186] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:31.518 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:31.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:31.518 Initializing NVMe Controllers 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:31.518 00:36:31.518 real 0m0.121s 00:36:31.518 user 0m0.051s 00:36:31.518 sys 0m0.070s 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:31.518 ************************************ 00:36:31.518 END TEST nvmf_target_disconnect_tc1 00:36:31.518 ************************************ 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:31.518 14:04:28 
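tc1's pass condition is a clean failure: no target is listening yet, so the reconnect example must hit errno 111 (ECONNREFUSED) and exit non-zero, which is why it runs under the NOT wrapper and es=1 satisfies the check. Standalone, with the same arguments:

  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  echo "exit=$?"                                       # non-zero is the expected outcome here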
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:31.518 ************************************ 00:36:31.518 START TEST nvmf_target_disconnect_tc2 00:36:31.518 ************************************ 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=515850 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 515850 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:31.518 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 515850 ']' 00:36:31.519 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.519 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:31.519 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.519 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:31.519 14:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:31.519 [2024-07-25 14:04:28.296387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:31.519 [2024-07-25 14:04:28.296434] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.519 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.519 [2024-07-25 14:04:28.336668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
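nvmfappstart above boots the target inside the namespace with shared-memory id 0, every tracepoint group enabled (-e 0xFFFF) and reactors on cores 4-7 (-m 0xF0), then waits for the RPC socket. A simple stand-in for that waitforlisten step, assuming the default /var/tmp/spdk.sock:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  scripts/rpc.py -t 60 rpc_get_methods > /dev/null     # returns once the target answers RPCs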
00:36:31.519 [2024-07-25 14:04:28.386445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:31.778 [2024-07-25 14:04:28.426636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.778 [2024-07-25 14:04:28.426674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.778 [2024-07-25 14:04:28.426685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.778 [2024-07-25 14:04:28.426694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.778 [2024-07-25 14:04:28.426701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:31.778 [2024-07-25 14:04:28.427252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:31.778 [2024-07-25 14:04:28.427342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:31.778 [2024-07-25 14:04:28.427449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:31.778 [2024-07-25 14:04:28.427451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 Malloc0 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 [2024-07-25 14:04:29.177930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 [2024-07-25 14:04:29.210189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=515889 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:32.346 14:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:32.605 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.517 14:04:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 515850 00:36:34.517 14:04:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with 
error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 [2024-07-25 14:04:31.239889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error 
(sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 [2024-07-25 14:04:31.240119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, 
sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Write completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 [2024-07-25 14:04:31.240338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.517 starting I/O failed 00:36:34.517 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 
00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Write completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 Read completed with error (sct=0, sc=8) 00:36:34.518 starting I/O failed 00:36:34.518 [2024-07-25 14:04:31.240554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:34.518 [2024-07-25 14:04:31.240827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.240846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.241074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.241087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.241269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.241282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.241572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.241612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.241972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.242014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.242326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.242367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.242651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.242705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 
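Every failed completion above is the intended outcome of tc2: build the target, start reconnect against it, then kill -9 the target two seconds in, so in-flight I/O completes with errors and each admin qpair reports CQ transport error -6. The sequence traced earlier, in plain RPC form:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2 && kill -9 515850                            # 515850 is the nvmf_tgt pid in this run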
00:36:34.518 [2024-07-25 14:04:31.243077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.243091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.243349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.243362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.243586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.243626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.243991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.244033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.244421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.244474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.244752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.244765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.245020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.245033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.245284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.245296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.245681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.245733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.246118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.246159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 
00:36:34.518 [2024-07-25 14:04:31.246637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.246677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.246956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.246968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.247167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.247179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.247384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.247401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.247738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.247780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.248013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.248053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.248313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.248352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.248652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.248669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.248940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.248958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.249225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.249241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 
00:36:34.518 [2024-07-25 14:04:31.249616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.518 [2024-07-25 14:04:31.249656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.518 qpair failed and we were unable to recover it. 00:36:34.518 [2024-07-25 14:04:31.250057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.250098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.250467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.250508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.250899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.250940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.251291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.251331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.251733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.251751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.252086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.252103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.252344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.252361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.252718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.252735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.252925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.252942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 
00:36:34.519 [2024-07-25 14:04:31.253156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.253172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.253508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.253524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.253857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.253875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.254184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.254201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.254489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.254506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.254831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.254848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.255063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.255080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.255336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.255353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.255693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.255752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.256134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.256180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 
00:36:34.519 [2024-07-25 14:04:31.256570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.256610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.256852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.256894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.257201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.257242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.257582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.257623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.257927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.257969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.258289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.258328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.258675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.258692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.259034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.259052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.259308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.259325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.259679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.259696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 
00:36:34.519 [2024-07-25 14:04:31.260039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.260075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.260414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.260454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.260763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.260805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.261159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.261200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.261595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.261635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.261986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.262004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.262216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.262233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.262500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.519 [2024-07-25 14:04:31.262517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.519 qpair failed and we were unable to recover it. 00:36:34.519 [2024-07-25 14:04:31.262794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.262810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.263069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.263085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 
00:36:34.520 [2024-07-25 14:04:31.263398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.263415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.263764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.263810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.264122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.264162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.264537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.264576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.264960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.265002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.265320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.265360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.265729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.265747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.266008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.266048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.266305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.266346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.266569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.266608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 
00:36:34.520 [2024-07-25 14:04:31.266976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.266994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.267258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.267275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.267690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.267741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.268106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.268147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.268500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.268540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.268934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.268975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.269278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.269319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.269703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.269752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.270142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.270182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.270538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.270583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 
00:36:34.520 [2024-07-25 14:04:31.270951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.270992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.271361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.271401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.271777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.271825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.272084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.272101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.272376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.272415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.272780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.272822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.273128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.273168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.273559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.273600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.273891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.273933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.274258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.274299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 
00:36:34.520 [2024-07-25 14:04:31.274613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.274653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.274963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.274980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.275323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.275340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.275536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.275553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.275819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.275837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.520 [2024-07-25 14:04:31.276101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.520 [2024-07-25 14:04:31.276118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.520 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.276386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.276426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.276712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.276766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.277155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.277195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.277511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.277551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 
00:36:34.521 [2024-07-25 14:04:31.277934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.277975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.278314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.278354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.278712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.278732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.279063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.279080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.279407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.279447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.279780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.279822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.280168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.280209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.280592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.280633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.280875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.280917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.281303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.281343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 
00:36:34.521 [2024-07-25 14:04:31.281740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.281789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.282114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.282131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.282398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.282415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.282762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.282779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.283111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.283128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.283425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.283442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.283785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.283826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.284211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.284251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.284566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.284606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.285015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.285062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 
00:36:34.521 [2024-07-25 14:04:31.285450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.285490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.521 [2024-07-25 14:04:31.285822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.521 [2024-07-25 14:04:31.285863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.521 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.286175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.286192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.286547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.286587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.286942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.286983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.287305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.287345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.287661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.287701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.288032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.288073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.288368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.288408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.288784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.288802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 
00:36:34.522 [2024-07-25 14:04:31.289094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.289111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.289471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.289510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.289811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.289853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.290231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.290271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.290654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.290694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.290944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.290961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.291143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.291160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.291470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.291511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.291877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.291919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.292304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.292344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 
00:36:34.522 [2024-07-25 14:04:31.292628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.292645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.292907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.292925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.293179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.293196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.293468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.293485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.293824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.293842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.294188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.294205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.294556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.294596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.294962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.295004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.295389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.295428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.295807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.295824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 
00:36:34.522 [2024-07-25 14:04:31.296186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.296226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.296663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.296704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.297080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.297121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.297509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.297549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.297858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.297875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.298197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.298213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.298546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.298587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.298974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.299016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.522 [2024-07-25 14:04:31.299408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.522 [2024-07-25 14:04:31.299448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.522 qpair failed and we were unable to recover it. 00:36:34.523 [2024-07-25 14:04:31.299829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.523 [2024-07-25 14:04:31.299876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:34.523 qpair failed and we were unable to recover it. 
00:36:34.523 [2024-07-25 14:04:31.300184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:34.523 [2024-07-25 14:04:31.300200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:34.523 qpair failed and we were unable to recover it.
00:36:34.523 [... the same three-line posix_sock_create / nvme_tcp_qpair_connect_sock error repeats, timestamps 14:04:31.300439 through 14:04:31.320675, all for tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 ...]
00:36:34.524 [2024-07-25 14:04:31.321124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:34.524 [2024-07-25 14:04:31.321204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420
00:36:34.524 qpair failed and we were unable to recover it.
00:36:34.524 [... the same error triplet then repeats, timestamps 14:04:31.321645 through 14:04:31.376490, for tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 ...]
00:36:34.529 [2024-07-25 14:04:31.376877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.376918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.377283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.377324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.377692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.377745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.378018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.378035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.378211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.378228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.378563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.378580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.378844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.378862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.379141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.379158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.379411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.379427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 00:36:34.529 [2024-07-25 14:04:31.379764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.529 [2024-07-25 14:04:31.379782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.529 qpair failed and we were unable to recover it. 
00:36:34.529 [2024-07-25 14:04:31.380111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.380128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.380364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.380405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.380800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.380842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.381138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.381155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.381372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.381389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.381743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.381760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.382069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.382149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.382433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.382477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.382792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.382806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.383014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.383027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 
00:36:34.530 [2024-07-25 14:04:31.383334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.383346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.383595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.383607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.383939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.383980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.384369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.384409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.384801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.384843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.385209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.385249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.385638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.385678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.386060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.386100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.386410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.386449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.386813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.386863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 
00:36:34.530 [2024-07-25 14:04:31.387250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.387291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.387603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.387643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.388047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.388060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.388387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.388400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.388711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.388727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.388912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.388925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.389106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.389118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.389424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.389436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.389770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.389783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.390055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.390095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 
00:36:34.530 [2024-07-25 14:04:31.390511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.390551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.390945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.390987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.391251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.391291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.391602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.391642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.392058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.392071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.392307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.392320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.392601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.392614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.530 [2024-07-25 14:04:31.392819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.530 [2024-07-25 14:04:31.392832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.530 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.393159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.393172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.393501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.393513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 
00:36:34.531 [2024-07-25 14:04:31.393821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.393834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.394035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.394047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.394371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.394383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.394683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.394696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.394944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.394957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.395208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.395221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.395423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.395436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.395765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.395806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.396217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.396256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.396646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.396685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 
00:36:34.531 [2024-07-25 14:04:31.397005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.397045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.531 [2024-07-25 14:04:31.397349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.531 [2024-07-25 14:04:31.397388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.531 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.397777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.397818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.398157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.398197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.398521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.398561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.398933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.398974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.399277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.399289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.399630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.399642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.399915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.399928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.400253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.400270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 
00:36:34.811 [2024-07-25 14:04:31.400573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.400586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.400933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.400946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.401217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.401256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.401652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.401693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.402039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.402052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.402382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.402422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.402810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.402851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.403169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.403209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.403528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.403568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.403935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.403976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 
00:36:34.811 [2024-07-25 14:04:31.404277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.404289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.404700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.404713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.404951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.404964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.405144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.405156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.405368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.405407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.405814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.405855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.406072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.406085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.406362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.406374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.406606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.406618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 00:36:34.811 [2024-07-25 14:04:31.406939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.811 [2024-07-25 14:04:31.406952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.811 qpair failed and we were unable to recover it. 
00:36:34.811 [2024-07-25 14:04:31.407263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.407276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.407606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.407646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.408035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.408076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.408429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.408470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.408796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.408838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.409201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.409241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.409641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.409681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.410087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.410129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.410453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.410493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.410816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.410858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 
00:36:34.812 [2024-07-25 14:04:31.411170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.411210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.411564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.411604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.411956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.411997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.412256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.412296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.412687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.412737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.413103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.413143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.413456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.413496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.413863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.413906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.414210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.414222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.414412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.414427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 
00:36:34.812 [2024-07-25 14:04:31.414727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.414741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.414974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.414987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.415296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.415308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.415657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.415696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.415965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.416006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.416335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.416375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.416708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.416755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.417000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.417041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.417338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.417378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.417751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.417793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 
00:36:34.812 [2024-07-25 14:04:31.418125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.418138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.418454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.418466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.418794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.418836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.419189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.419229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.419565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.419605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.419907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.419948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.420264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.420305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.812 qpair failed and we were unable to recover it. 00:36:34.812 [2024-07-25 14:04:31.420698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.812 [2024-07-25 14:04:31.420746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.813 qpair failed and we were unable to recover it. 00:36:34.813 [2024-07-25 14:04:31.421056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.813 [2024-07-25 14:04:31.421096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.813 qpair failed and we were unable to recover it. 00:36:34.813 [2024-07-25 14:04:31.421477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.813 [2024-07-25 14:04:31.421518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.813 qpair failed and we were unable to recover it. 
00:36:34.813 [2024-07-25 14:04:31.421935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.813 [2024-07-25 14:04:31.421977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.813 qpair failed and we were unable to recover it.
00:36:34.813 [... the same three-part error repeats continuously from 14:04:31.421935 through 14:04:31.493539: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. Duplicate entries elided. ...]
00:36:34.818 [2024-07-25 14:04:31.493527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.493539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it.
00:36:34.818 [2024-07-25 14:04:31.493788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.493801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it. 00:36:34.818 [2024-07-25 14:04:31.494010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.494023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it. 00:36:34.818 [2024-07-25 14:04:31.494350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.494363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it. 00:36:34.818 [2024-07-25 14:04:31.494607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.494619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it. 00:36:34.818 [2024-07-25 14:04:31.494938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.494951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it. 00:36:34.818 [2024-07-25 14:04:31.495212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.818 [2024-07-25 14:04:31.495225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.818 qpair failed and we were unable to recover it. 00:36:34.818 [2024-07-25 14:04:31.495472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.495484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.495797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.495810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.496065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.496097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.496416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.496456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 
00:36:34.819 [2024-07-25 14:04:31.496783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.496825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.497105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.497144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.497440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.497453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.497708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.497732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.498045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.498058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.498260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.498273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.498586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.498599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.498941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.498954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.499232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.499246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.499432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.499445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 
00:36:34.819 [2024-07-25 14:04:31.499796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.499809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.500066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.500079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.500290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.500302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.500581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.500621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.501012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.501065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.501357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.501370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.501641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.501655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.501967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.501981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.502287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.502300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.502488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.502501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 
00:36:34.819 [2024-07-25 14:04:31.502859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.502873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.503201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.503214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.503549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.503561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.503866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.503879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.504133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.504146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.504385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.504398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.504665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.504679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.504999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.505012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.505299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.505311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.505548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.505561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 
00:36:34.819 [2024-07-25 14:04:31.505773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.505786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.505998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.506011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.506270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.819 [2024-07-25 14:04:31.506283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.819 qpair failed and we were unable to recover it. 00:36:34.819 [2024-07-25 14:04:31.506457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.506470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.506806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.506847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.507101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.507141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.507529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.507542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.507942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.507955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.508255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.508268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.508522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.508536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 
00:36:34.820 [2024-07-25 14:04:31.508839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.508852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.509071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.509084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.509403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.509416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.509616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.509629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.509813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.509826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.510157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.510170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.510382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.510395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.510681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.510694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.511027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.511068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.511405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.511445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 
00:36:34.820 [2024-07-25 14:04:31.511771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.511813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.512175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.512215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.512660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.512700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.512994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.513035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.513380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.513393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.513731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.513744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.514029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.514042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.514314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.514327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.514588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.514601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.514847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.514860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 
00:36:34.820 [2024-07-25 14:04:31.515115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.515128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.515367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.515379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.515689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.515701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.515899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.515911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.516253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.516266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.820 [2024-07-25 14:04:31.516545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.820 [2024-07-25 14:04:31.516557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.820 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.516863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.516878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.517092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.517105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.517392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.517405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.517740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.517753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 
00:36:34.821 [2024-07-25 14:04:31.518011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.518024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.518211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.518223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.518510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.518523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.518807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.518820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.519097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.519109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.519359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.519372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.519676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.519690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.519962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.519975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.520294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.520307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.520635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.520647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 
00:36:34.821 [2024-07-25 14:04:31.520962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.520975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.521235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.521247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.521526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.521875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.521889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.522163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.522176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.522491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.522504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.522830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.522843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.523098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.523111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.523416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.523428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.523711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.523728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 
00:36:34.821 [2024-07-25 14:04:31.524031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.524044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.524242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.524256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.524494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.524506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.524815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.524828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.525081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.525094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.525297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.525310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.525634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.525646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.525982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.525995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.526324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.526338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.526645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.526658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 
00:36:34.821 [2024-07-25 14:04:31.526956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.526970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.527232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.527245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.527497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.821 [2024-07-25 14:04:31.527510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.821 qpair failed and we were unable to recover it. 00:36:34.821 [2024-07-25 14:04:31.527713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.527730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.527991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.528004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.528264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.528277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.528603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.528618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.528960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.528973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.529229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.529241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.529573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.529586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 
00:36:34.822 [2024-07-25 14:04:31.529894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.529907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.530171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.530183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.530442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.530454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.530730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.530743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.531051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.531063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.531318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.531331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.531599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.531612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.531933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.531947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.532266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.532307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 00:36:34.822 [2024-07-25 14:04:31.532679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.532731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it. 
00:36:34.822 [2024-07-25 14:04:31.533072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.822 [2024-07-25 14:04:31.533112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.822 qpair failed and we were unable to recover it.
00:36:34.822 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats continuously, identical except for timestamps, from 2024-07-25 14:04:31.533535 through 14:04:31.600166 (log playback 00:36:34.822-00:36:34.828) ...]
00:36:34.828 [2024-07-25 14:04:31.600554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.600594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.600881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.600894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.601136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.601148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.601399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.601413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.601712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.601733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.601921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.601933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.602252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.602264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.602605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.602618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.602928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.602940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.603211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.603223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 
00:36:34.828 [2024-07-25 14:04:31.603483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.603495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.603831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.603844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.604095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.604107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.604309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.604321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.604663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.604675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.604932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.604945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.605195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.605207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.605472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.605484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.605789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.605801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.606030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.606042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 
00:36:34.828 [2024-07-25 14:04:31.606364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.606377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.606555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.606568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.606881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.606894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.607215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.607255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.607643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.607682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.608057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.608097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.608388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.608427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.608788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.608830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.609189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.609229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 00:36:34.828 [2024-07-25 14:04:31.609501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.828 [2024-07-25 14:04:31.609541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.828 qpair failed and we were unable to recover it. 
00:36:34.828 [2024-07-25 14:04:31.609938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.609979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.610269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.610281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.610527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.610539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.610864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.610876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.611149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.611161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.611500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.611539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.611937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.611978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.612317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.612353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.612657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.612669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.612913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.612926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 
00:36:34.829 [2024-07-25 14:04:31.613153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.613166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.613453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.613493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.613876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.613917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.614275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.614320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.614684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.614732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.615117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.615156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.615469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.615508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.615824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.615836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.616180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.616192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.616436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.616448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 
00:36:34.829 [2024-07-25 14:04:31.616777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.616789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.617156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.617169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.617343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.617365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.617698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.617763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.618123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.618163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.618545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.618585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.618964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.619006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.619341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.619380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.619699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.619711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.619958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.619980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 
00:36:34.829 [2024-07-25 14:04:31.620301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.620324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.620672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.620712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.621108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.829 [2024-07-25 14:04:31.621148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.829 qpair failed and we were unable to recover it. 00:36:34.829 [2024-07-25 14:04:31.621457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.621496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.621879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.621921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.622283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.622322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.622712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.622759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.623092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.623132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.623515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.623554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.623849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.623890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 
00:36:34.830 [2024-07-25 14:04:31.624210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.624250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.624632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.624671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.624961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.624973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.625355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.625367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.625674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.625741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.626101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.626142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.626524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.626563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.626947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.626988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.627374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.627413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.627802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.627842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 
00:36:34.830 [2024-07-25 14:04:31.628201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.628241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.628632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.628671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.629065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.629105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.629486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.629531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.629835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.629877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.630262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.630302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.630689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.630741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.631050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.631090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.631442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.631454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.631778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.631790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 
00:36:34.830 [2024-07-25 14:04:31.632113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.632125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.632457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.632497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.632881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.632922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.633236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.633276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.633659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.633708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.634097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.634137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.634519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.634558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.634936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.634949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.635269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.635282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.830 [2024-07-25 14:04:31.635627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.635667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 
00:36:34.830 [2024-07-25 14:04:31.636062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.830 [2024-07-25 14:04:31.636103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.830 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.636431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.636470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.636860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.636872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.637169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.637181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.637500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.637539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.637855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.637896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.638200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.638239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.638571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.638583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.638856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.638868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.639233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.639273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 
00:36:34.831 [2024-07-25 14:04:31.639664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.639705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.640099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.640140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.640447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.640486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.640867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.640908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.641296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.641335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.641748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.641790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.642160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.642200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.642582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.642622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.642985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.643026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.643392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.643432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 
00:36:34.831 [2024-07-25 14:04:31.643817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.643859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.644242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.644282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.644663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.644703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.645076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.645122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.645484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.645523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.645912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.645953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.646269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.646308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.646692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.646741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.647101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.647140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 00:36:34.831 [2024-07-25 14:04:31.647523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.831 [2024-07-25 14:04:31.647562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:34.831 qpair failed and we were unable to recover it. 
00:36:34.831 [2024-07-25 14:04:31.647923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:34.831 [2024-07-25 14:04:31.647964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:34.831 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:04:31.647 through 14:04:31.721, all against tqpair=0x7f059c000b90, addr=10.0.0.2, port=4420; duplicate records omitted ...]
00:36:35.109 [2024-07-25 14:04:31.721812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.109 [2024-07-25 14:04:31.721824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.109 qpair failed and we were unable to recover it.
00:36:35.109 [2024-07-25 14:04:31.722131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.722143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.722386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.722399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.722726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.722739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.723004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.723016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.723294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.723306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.723628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.723640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.724004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.724016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.724351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.724391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.724758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.724799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.725175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.725187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 
00:36:35.109 [2024-07-25 14:04:31.725526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.725567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.725935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.725976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.726361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.726401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.726776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.726816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.727151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.727191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.727525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.727565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.727932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.727973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.109 [2024-07-25 14:04:31.728337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.109 [2024-07-25 14:04:31.728377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.109 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.728760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.728801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.729172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.729212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 
00:36:35.110 [2024-07-25 14:04:31.729618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.729658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.729992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.730004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.730324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.730336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.730579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.730593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.730901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.730913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.731108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.731120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.731432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.731444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.731722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.731734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.732087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.732129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.732438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.732477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 
00:36:35.110 [2024-07-25 14:04:31.732864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.732905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.733223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.733263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.733571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.733611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.733901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.733942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.734326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.734366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.734748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.734790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.735171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.735210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.735596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.735636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.735951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.735963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.736230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.736242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 
00:36:35.110 [2024-07-25 14:04:31.736507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.736519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.736848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.736860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.737224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.737236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.737491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.737531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.737941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.737982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.738269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.738281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.738620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.738660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.110 [2024-07-25 14:04:31.739069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.110 [2024-07-25 14:04:31.739110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.110 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.739440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.739479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.739814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.739827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 
00:36:35.111 [2024-07-25 14:04:31.740154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.740166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.740500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.740540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.740851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.740863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.741208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.741220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.741516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.741528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.741839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.741852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.742150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.742162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.742487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.742499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.742692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.742704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.742962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.742975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 
00:36:35.111 [2024-07-25 14:04:31.743299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.743311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.743685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.743697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.743985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.743997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.744303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.744317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.744563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.744575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.744823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.744836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.745162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.745174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.745474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.745486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.745806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.745819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.746188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.746200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 
00:36:35.111 [2024-07-25 14:04:31.746531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.746544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.746842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.746854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.747182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.747222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.747607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.747647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.748030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.748043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.748360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.748373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.748625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.748637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.748946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.748959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.749272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.749284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.749592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.749604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 
00:36:35.111 [2024-07-25 14:04:31.749924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.749937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.750137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.750149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.750501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.750540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.750926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.750962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.111 [2024-07-25 14:04:31.751210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.111 [2024-07-25 14:04:31.751222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.111 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.751495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.751507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.751750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.751763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.752055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.752067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.752400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.752412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.752608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.752620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 
00:36:35.112 [2024-07-25 14:04:31.752846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.752860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.753186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.753198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.753514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.753525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.753882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.753894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.754144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.754156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.754476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.754515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.754860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.754901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.755306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.755346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.755736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.755778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.756158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.756198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 
00:36:35.112 [2024-07-25 14:04:31.756581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.756621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.756983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.756996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.757302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.757314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.757635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.757649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.757971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.757983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.758225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.758237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.758544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.758557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.758885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.758926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.759311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.759351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.759663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.759704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 
00:36:35.112 [2024-07-25 14:04:31.760015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.760027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.760328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.760340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.760652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.760692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.761092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.761133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.761492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.761532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.761902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.761914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.762235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.762247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.762553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.762565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.762884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.762897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.763224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.763236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 
00:36:35.112 [2024-07-25 14:04:31.763540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.763553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.763871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.763884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.112 qpair failed and we were unable to recover it. 00:36:35.112 [2024-07-25 14:04:31.764132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.112 [2024-07-25 14:04:31.764144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.764394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.764406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.764654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.764666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.764849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.764862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.765113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.765125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.765382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.765394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.765666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.765678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 00:36:35.113 [2024-07-25 14:04:31.765982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.113 [2024-07-25 14:04:31.765994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.113 qpair failed and we were unable to recover it. 
00:36:35.113 [2024-07-25 14:04:31.766192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.113 [2024-07-25 14:04:31.766205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.113 qpair failed and we were unable to recover it.
00:36:35.113 [... the same three-record failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~200 more times, timestamps 14:04:31.766 through 14:04:31.839 ...]
00:36:35.119 [2024-07-25 14:04:31.839268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.119 [2024-07-25 14:04:31.839308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.119 qpair failed and we were unable to recover it.
00:36:35.119 [2024-07-25 14:04:31.839691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.839751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.840095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.840107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.840395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.840407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.840737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.840778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.841169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.841209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.841459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.841498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.841854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.841867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.842184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.842224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.842589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.842629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.842944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.842985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 
00:36:35.119 [2024-07-25 14:04:31.843279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.843319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.843679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.843726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.844001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.844013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.844280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.844292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.844592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.844605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.844856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.844869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.845109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.845121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.845448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.845461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.845767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.845808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.846184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.846224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 
00:36:35.119 [2024-07-25 14:04:31.846479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.846519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.846910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.846951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.847259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.847299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.847680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.847730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.848134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.848174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.848512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.848524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.848704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.848723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.848965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.848977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.849230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.849242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.849494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.849506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 
00:36:35.119 [2024-07-25 14:04:31.849748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.849763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.850007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.850019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.850249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.850261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.850586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.850599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.850882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.119 [2024-07-25 14:04:31.850897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.119 qpair failed and we were unable to recover it. 00:36:35.119 [2024-07-25 14:04:31.851151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.851190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.851501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.851541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.851855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.851897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.852261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.852301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.852538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.852577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 
00:36:35.120 [2024-07-25 14:04:31.852893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.852938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.853207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.853246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.853596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.853635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.854030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.854069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.854367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.854406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.854792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.854834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.855167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.855202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.855548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.855597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.855909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.855950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.856307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.856320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 
00:36:35.120 [2024-07-25 14:04:31.856646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.856659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.856968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.857009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.857369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.857410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.857735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.857779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.858189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.858229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.858618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.858676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.858960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.859001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.859393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.859433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.859797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.859810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.860071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.860110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 
00:36:35.120 [2024-07-25 14:04:31.860401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.860441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.860826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.860868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.861161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.861201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.861603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.861643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.862041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.862082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.862454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.862494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.120 qpair failed and we were unable to recover it. 00:36:35.120 [2024-07-25 14:04:31.862860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.120 [2024-07-25 14:04:31.862901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.863212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.863253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.863567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.863606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.863862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.863904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 
00:36:35.121 [2024-07-25 14:04:31.864220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.864267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.864603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.864643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.864953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.864994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.865379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.865420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.865782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.865823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.866133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.866147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.866359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.866373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.866671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.866684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.866987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.867000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.867197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.867209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 
00:36:35.121 [2024-07-25 14:04:31.867527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.867539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.867861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.867875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.868121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.868133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.868468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.868481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.868665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.868677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.868910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.868923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.869139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.869178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.869601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.869641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.869980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.870021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.870320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.870360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 
00:36:35.121 [2024-07-25 14:04:31.870750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.870791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.871130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.871169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.871535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.871575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.871885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.871897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.872147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.872159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.872403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.872415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.872668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.872680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.872910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.872923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.873172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.873184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.873517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.873529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 
00:36:35.121 [2024-07-25 14:04:31.873897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.873910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.874172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.874212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.121 [2024-07-25 14:04:31.874622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.121 [2024-07-25 14:04:31.874662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.121 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.875009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.875051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.875376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.875388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.875652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.875664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.875968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.875980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.876235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.876248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.876522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.876535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.876812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.876825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 
00:36:35.122 [2024-07-25 14:04:31.877147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.877163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.877497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.877508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.877763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.877799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.878126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.878166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.878519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.878558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.878891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.878935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.879189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.879202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.879532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.879544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.879815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.879828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.880084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.880096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 
00:36:35.122 [2024-07-25 14:04:31.880345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.880357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.880600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.880613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.880915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.880928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.881282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.881322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.881626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.881666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.882031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.882044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.882367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.882380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.882705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.882720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.882993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.883033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 00:36:35.122 [2024-07-25 14:04:31.883262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.122 [2024-07-25 14:04:31.883302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.122 qpair failed and we were unable to recover it. 
00:36:35.122 [2024-07-25 14:04:31.883711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.122 [2024-07-25 14:04:31.883763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.122 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:04:31.884101 through 14:04:31.912385 ...]
00:36:35.125 [2024-07-25 14:04:31.912726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.125 [2024-07-25 14:04:31.912765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:35.125 qpair failed and we were unable to recover it.
00:36:35.125 [2024-07-25 14:04:31.913065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.125 [2024-07-25 14:04:31.913110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:35.125 qpair failed and we were unable to recover it.
00:36:35.125 [2024-07-25 14:04:31.913323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x605b30 is same with the state(5) to be set
00:36:35.125 [2024-07-25 14:04:31.913810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.125 [2024-07-25 14:04:31.913866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:35.125 qpair failed and we were unable to recover it.
[... the same three-line failure record against tqpair=0x7f059c000b90, addr=10.0.0.2, port=4420 (errno = 111) repeats for every remaining reconnect attempt from 14:04:31.914201 through 14:04:31.954896 ...]
00:36:35.128 [2024-07-25 14:04:31.955204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.128 [2024-07-25 14:04:31.955216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.128 qpair failed and we were unable to recover it.
00:36:35.128 [2024-07-25 14:04:31.955524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.955535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.955921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.955962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.956362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.956401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.956765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.956778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.957038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.957077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.957332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.957371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.957738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.957780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.958161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.958200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.958459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.958498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.958856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.958897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 
00:36:35.128 [2024-07-25 14:04:31.959240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.959280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.959688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.959737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.960118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.960157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.960485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.960511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.960894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.960935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.961327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.961367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.961748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.961789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.962178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.962218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.128 [2024-07-25 14:04:31.962601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.128 [2024-07-25 14:04:31.962640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.128 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.962941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.962982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 
00:36:35.129 [2024-07-25 14:04:31.963371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.963410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.963749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.963790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.964100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.964139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.964505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.964546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.964859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.964905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.965265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.965304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.965603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.965615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.965850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.965863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.966195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.966207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.966436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.966449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 
00:36:35.129 [2024-07-25 14:04:31.966771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.966784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.966964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.966976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.967218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.967230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.967473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.967485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.967804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.967817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.968061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.968073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.968370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.968382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.968625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.968638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.968807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.968819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.969167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.969207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 
00:36:35.129 [2024-07-25 14:04:31.969613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.969653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.969988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.970029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.970411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.970451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.970758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.970799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.971119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.971159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.971470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.971510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.971803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.971845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.972221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.972261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.972632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.972672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 00:36:35.129 [2024-07-25 14:04:31.973138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.129 [2024-07-25 14:04:31.973216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.129 qpair failed and we were unable to recover it. 
00:36:35.129 [2024-07-25 14:04:31.973586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.973604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.973890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.973909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.974225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.974266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.974508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.974548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.974907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.974948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.975190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.975202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.975459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.975471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.975794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.975806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.976129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.976141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.976439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.976451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 
00:36:35.130 [2024-07-25 14:04:31.976731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.976771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.977156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.977196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.977578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.977617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.977877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.977918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.978211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.978256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.978554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.978567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.978866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.978878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.979114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.979147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.979482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.979521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.979780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.979822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 
00:36:35.130 [2024-07-25 14:04:31.980145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.980185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.980516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.980556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.980847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.130 [2024-07-25 14:04:31.980888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.130 qpair failed and we were unable to recover it. 00:36:35.130 [2024-07-25 14:04:31.981266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.981305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.981627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.981641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.981876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.981889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.982141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.982153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.982403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.982415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.982753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.982766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.983028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.983075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 
00:36:35.413 [2024-07-25 14:04:31.983383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.983422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.983792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.983805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.984127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.984140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.984412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.984451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.984788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.984829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.985057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.985097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.985469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.985509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.985867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.985908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.986235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.986275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.986580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.986620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 
00:36:35.413 [2024-07-25 14:04:31.986924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.986965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.987376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.987422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.987687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.987754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.988166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.988207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.988543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.988582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.988916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.988957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.989131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.989172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.989533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.989572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.989987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.990028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.990356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.990373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 
00:36:35.413 [2024-07-25 14:04:31.990687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.990704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.990994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.991034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.991417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.991456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.991857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.413 [2024-07-25 14:04:31.991898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.413 qpair failed and we were unable to recover it. 00:36:35.413 [2024-07-25 14:04:31.992213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.992263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.992560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.992576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.992841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.992858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.993187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.993225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.993491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.993530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.993846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.993888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 
00:36:35.414 [2024-07-25 14:04:31.994220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.994260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.994499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.994515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.994823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.994840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.995164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.995180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.995363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.995380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.995660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.995677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.995918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.995934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.996058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.996075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.996314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.996333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.996665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.996682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 
00:36:35.414 [2024-07-25 14:04:31.996943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.996960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.997289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.997305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.997623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.997662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.997910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.997951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.998318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.998335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.998575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.998592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.998836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.998853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.999130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.999146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.999339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.999356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 00:36:35.414 [2024-07-25 14:04:31.999608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.414 [2024-07-25 14:04:31.999648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.414 qpair failed and we were unable to recover it. 
00:36:35.414 [2024-07-25 14:04:31.999969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.414 [2024-07-25 14:04:32.000010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:35.414 qpair failed and we were unable to recover it.
00:36:35.414 [... 69 further identical connect() failed (errno = 111) / qpair-failure reports for tqpair=0x5f7b30, 2024-07-25 14:04:32.000324 through 14:04:32.020734, omitted ...]
00:36:35.416 [2024-07-25 14:04:32.021190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.416 [2024-07-25 14:04:32.021268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:35.416 qpair failed and we were unable to recover it.
00:36:35.416 [2024-07-25 14:04:32.021571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.416 [2024-07-25 14:04:32.021607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420
00:36:35.416 qpair failed and we were unable to recover it.
00:36:35.417 [... 38 further identical reports for tqpair=0x7f0594000b90, 2024-07-25 14:04:32.021959 through 14:04:32.033257, omitted ...]
00:36:35.417 [2024-07-25 14:04:32.033596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.417 [2024-07-25 14:04:32.033644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:35.417 qpair failed and we were unable to recover it.
00:36:35.421 [... 99 further identical reports for tqpair=0x7f05a4000b90, 2024-07-25 14:04:32.033928 through 14:04:32.066844, omitted ...]
00:36:35.421 [2024-07-25 14:04:32.067018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.067035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.067361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.067377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.067648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.067664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.067872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.067889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.068219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.068235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.068590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.068606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.068880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.068897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.069209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.069225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.069587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.069604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.069934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.069950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 
00:36:35.421 [2024-07-25 14:04:32.070206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.070223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.070475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.070508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.070734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.070775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.071005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.071044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.071355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.071395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.071666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.071682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.071968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.071985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.072315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.072361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.072674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.072735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.421 qpair failed and we were unable to recover it. 00:36:35.421 [2024-07-25 14:04:32.073121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.421 [2024-07-25 14:04:32.073160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 
00:36:35.422 [2024-07-25 14:04:32.073549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.073589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.073897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.073939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.074299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.074339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.074653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.074670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.074947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.074964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.075203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.075220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.075552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.075568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.075846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.075863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.076187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.076203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.076463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.076479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 
00:36:35.422 [2024-07-25 14:04:32.076666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.076682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.077014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.077030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.077287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.077303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.077426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.077444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.077705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.077755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.078134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.078173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.078535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.078574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.078954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.078995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.079371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.079410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.079628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.079668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 
00:36:35.422 [2024-07-25 14:04:32.080039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.080080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.080436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.080475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.080731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.080748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.080989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.081006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.081248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.081264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.081518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.422 [2024-07-25 14:04:32.081534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.422 qpair failed and we were unable to recover it. 00:36:35.422 [2024-07-25 14:04:32.081792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.081809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.082106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.082123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.082431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.082448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.082610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.082627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 
00:36:35.423 [2024-07-25 14:04:32.082811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.082828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.083159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.083202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.083573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.083613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.083990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.084007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.084262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.084279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.084590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.084607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.084795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.084812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.085125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.085164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.085525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.085565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.085899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.085939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 
00:36:35.423 [2024-07-25 14:04:32.086353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.086395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.086753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.086794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.087060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.087099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.087487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.087527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.087871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.087888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.088217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.088256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.088634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.088674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.089057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.089098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.089388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.089428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.089837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.089878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 
00:36:35.423 [2024-07-25 14:04:32.090186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.090225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.090555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.090595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.423 qpair failed and we were unable to recover it. 00:36:35.423 [2024-07-25 14:04:32.090989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.423 [2024-07-25 14:04:32.091029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.091387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.091427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.091666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.091682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.091926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.091943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.092133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.092149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.092433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.092449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.092712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.092740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.092991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.093008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 
00:36:35.424 [2024-07-25 14:04:32.093377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.093417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.093807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.093867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.094181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.094221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.094581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.094620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.094947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.094964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.095205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.095222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.095477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.095494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.095775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.095793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.096066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.096083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.096270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.096286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 
00:36:35.424 [2024-07-25 14:04:32.096526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.096542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.096883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.096925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.097283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.097323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.097631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.097648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.097828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.097846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.098034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.098051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.098331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.098371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.098736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.098777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.099149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.099190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.099573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.099613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 
00:36:35.424 [2024-07-25 14:04:32.099958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.099978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.100221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.100238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.100587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.100604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.100813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.100830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.101192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.101232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.101585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.101625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.102006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.102048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.424 qpair failed and we were unable to recover it. 00:36:35.424 [2024-07-25 14:04:32.102409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.424 [2024-07-25 14:04:32.102449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.102735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.102752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.103001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.103017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 
00:36:35.425 [2024-07-25 14:04:32.103343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.103360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.103602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.103618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.103870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.103887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.104085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.104102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.104378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.104417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.104810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.104850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.105167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.105206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.105515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.105555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.105963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.106004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.106334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.106373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 
00:36:35.425 [2024-07-25 14:04:32.106701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.106721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.107045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.107061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.107253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.107269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.107511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.107527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.107765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.107782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.108109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.108125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.108391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.108408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.108727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.108776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.109028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.109068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 00:36:35.425 [2024-07-25 14:04:32.109321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.425 [2024-07-25 14:04:32.109361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.425 qpair failed and we were unable to recover it. 
00:36:35.425 [2024-07-25 14:04:32.109648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.425 [2024-07-25 14:04:32.109665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:35.425 qpair failed and we were unable to recover it.
00:36:35.432 [the same three-line error sequence repeats continuously from 2024-07-25 14:04:32.109943 through 14:04:32.180174: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x7f05a4000b90 failed with errno = 111, and each time the qpair could not be recovered]
00:36:35.432 [2024-07-25 14:04:32.180409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.180428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.180601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.180617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.180876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.180917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.181251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.181291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.181655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.181695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.182013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.182054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.182347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.182387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.182766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.182807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.182957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.182997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.183289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.183329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 
00:36:35.432 [2024-07-25 14:04:32.183637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.183677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.183953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.184033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.184368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.184412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.184671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.184712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.185030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.185043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.185274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.185286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.185536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.185548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.185790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.185803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.186101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.186113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.186297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.186309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 
00:36:35.432 [2024-07-25 14:04:32.186640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.186652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.186897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.186910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.187230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.187242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.187490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.187502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.187801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.187842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.188112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.188151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.188536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.188576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.188909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.188938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.189243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.189283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.189644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.189684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 
00:36:35.432 [2024-07-25 14:04:32.190035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.190075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.190461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.190500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.190787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.190800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.432 qpair failed and we were unable to recover it. 00:36:35.432 [2024-07-25 14:04:32.191052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.432 [2024-07-25 14:04:32.191064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.191361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.191374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.191690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.191734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.192097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.192136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.192453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.192493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.192859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.192900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.193261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.193300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 
00:36:35.433 [2024-07-25 14:04:32.193564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.193609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.194006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.194020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.194319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.194330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.194649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.194661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.194984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.194997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.195295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.195308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.195490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.195502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.195767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.195780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.196021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.196033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.196269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.196281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 
00:36:35.433 [2024-07-25 14:04:32.196529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.196541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.196781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.196793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.197094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.197106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.197352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.197364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.197471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.197482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.197743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.197798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.198108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.198149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.198462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.198502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.198809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.198851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.199211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.199250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 
00:36:35.433 [2024-07-25 14:04:32.199638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.199677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.199983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.200023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.200431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.200471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.200779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.200820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.201182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.201222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.201536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.201575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.201888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.201901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.202001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.202013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.202244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.202256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 00:36:35.433 [2024-07-25 14:04:32.202553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.433 [2024-07-25 14:04:32.202566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.433 qpair failed and we were unable to recover it. 
00:36:35.434 [2024-07-25 14:04:32.202887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.202900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.203128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.203140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.203316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.203328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.203656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.203696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.203995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.204036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.204368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.204379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.204560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.204572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.204887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.204900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.204990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.205002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.205335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.205347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 
00:36:35.434 [2024-07-25 14:04:32.205607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.205652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.206052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.206093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.206459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.206471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.206809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.206850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.207274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.207314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.207641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.207681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.208076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.208116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.208428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.208467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.208875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.208915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.209212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.209224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 
00:36:35.434 [2024-07-25 14:04:32.209381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.209393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.209623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.209635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.209770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.209782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.209959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.209971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.210221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.210234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.210388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.210400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.210761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.210803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.211166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.211206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.211513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.211552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.434 qpair failed and we were unable to recover it. 00:36:35.434 [2024-07-25 14:04:32.211862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.434 [2024-07-25 14:04:32.211903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 
00:36:35.435 [2024-07-25 14:04:32.212265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.212305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.212597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.212637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.213026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.213067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.213407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.213447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.213777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.213817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.214200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.214240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.214625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.214665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.215001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.215042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.215301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.215341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.215583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.215623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 
00:36:35.435 [2024-07-25 14:04:32.215843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.215884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.216167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.216179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.216489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.216529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.216787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.216828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.217139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.217179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.217473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.217513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.217867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.217907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.218197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.218209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.218532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.218544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.218702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.218724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 
00:36:35.435 [2024-07-25 14:04:32.219047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.219061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.219255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.219268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.219518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.219530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.219774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.219787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.219964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.219977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.220215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.220255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.220568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.220608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.220913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.220954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.221267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.221306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 00:36:35.435 [2024-07-25 14:04:32.221651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.435 [2024-07-25 14:04:32.221691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.435 qpair failed and we were unable to recover it. 
00:36:35.435 [2024-07-25 14:04:32.222010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.435 [2024-07-25 14:04:32.222023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.435 qpair failed and we were unable to recover it.
[the same three-line failure triplet — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 14:04:32.222 through 14:04:32.282; only the first and last occurrences are kept here]
00:36:35.726 [2024-07-25 14:04:32.282260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.726 [2024-07-25 14:04:32.282272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.726 qpair failed and we were unable to recover it.
00:36:35.726 [2024-07-25 14:04:32.282503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.282515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.282814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.282826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.283010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.283022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.283212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.283252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.283633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.283673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.284112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.284189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.284695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.284782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.285106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.285129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.285515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.285557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.285828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.285868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 
00:36:35.726 [2024-07-25 14:04:32.286167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.286180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.286337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.286349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.286619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.286658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.287011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.287052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.287275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.287287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.287552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.287564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.287809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.287822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.288054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.288066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.288293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.288306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.288496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.288509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 
00:36:35.726 [2024-07-25 14:04:32.288780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.288821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.289207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.289257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.289424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.289436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.289663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.289676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.289972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.289985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.290300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.290312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.290675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.726 [2024-07-25 14:04:32.290739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.726 qpair failed and we were unable to recover it. 00:36:35.726 [2024-07-25 14:04:32.291139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.291180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.291456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.291495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.291832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.291872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 
00:36:35.727 [2024-07-25 14:04:32.292113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.292152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.292455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.292495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.292793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.292833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.293085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.293125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.293513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.293554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.293914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.293954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.294202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.294241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.294359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.294371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.294621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.294634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.294864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.294876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 
00:36:35.727 [2024-07-25 14:04:32.295074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.295086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.295335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.295374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.295754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.295794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.296179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.296220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.296518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.296530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.296853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.296866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.297116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.297129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.297363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.297377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.297619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.297631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.297896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.297908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 
00:36:35.727 [2024-07-25 14:04:32.298076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.298089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.298331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.298371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.298674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.298722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.299047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.299059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.299303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.299316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.299544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.299556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.299901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.299913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.300078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.300090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.300409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.300421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.300722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.300734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 
00:36:35.727 [2024-07-25 14:04:32.300901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.300913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.301109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.301121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.301457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.301497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.301841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.301883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.302164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.302176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.302470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.302483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.302678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.302690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.302945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.302958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.303212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.303252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.303634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.303674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 
00:36:35.727 [2024-07-25 14:04:32.304064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.304104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.304506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.727 [2024-07-25 14:04:32.304547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.727 qpair failed and we were unable to recover it. 00:36:35.727 [2024-07-25 14:04:32.304792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.304834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.305137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.305149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.305397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.305410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.305509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.305521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.305761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.305773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.306015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.306027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.306274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.306286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.306488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.306501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 
00:36:35.728 [2024-07-25 14:04:32.306756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.306797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.307151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.307164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.307584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.307623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.307943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.307983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.308342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.308382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.308748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.308789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.309087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.309099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.309346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.309360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.309591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.309603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.309857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.309870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 
00:36:35.728 [2024-07-25 14:04:32.310049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.310061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.310287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.310299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.310463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.310475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.310820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.310832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.311106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.311146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.311387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.311427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.311805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.311846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.312137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.312177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.312532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.312572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.312885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.312927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 
00:36:35.728 [2024-07-25 14:04:32.313313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.313353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.313677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.313724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.313976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.314016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.314250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.314262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.314538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.314551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.314855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.314868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.315122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.315134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.315371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.315383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.315727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.728 [2024-07-25 14:04:32.315739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.728 qpair failed and we were unable to recover it. 00:36:35.728 [2024-07-25 14:04:32.316087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.316127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 
00:36:35.729 [2024-07-25 14:04:32.316289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.316328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.316496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.316535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.316833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.316874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.317189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.317229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.317533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.317545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.317653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.317664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.317895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.317907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.318137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.318150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.318379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.318392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.318707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.318724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 
00:36:35.729 [2024-07-25 14:04:32.318966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.318978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.319301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.319313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.319414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.319426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.319658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.319670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.319967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.319979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.320262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.320274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.320537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.320549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.320793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.320808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.321053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.321065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 00:36:35.729 [2024-07-25 14:04:32.321363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.729 [2024-07-25 14:04:32.321375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.729 qpair failed and we were unable to recover it. 
00:36:35.729 [2024-07-25 14:04:32.321549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.729 [2024-07-25 14:04:32.321562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.729 qpair failed and we were unable to recover it.
00:36:35.729 [... the same three-line failure (posix.c:1023:posix_sock_create connect() errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously between 14:04:32.321881 and 14:04:32.390156, differing only in timestamps ...]
00:36:35.735 [2024-07-25 14:04:32.390515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.735 [2024-07-25 14:04:32.390554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.735 qpair failed and we were unable to recover it.
00:36:35.735 [2024-07-25 14:04:32.390927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.735 [2024-07-25 14:04:32.390968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.735 qpair failed and we were unable to recover it. 00:36:35.735 [2024-07-25 14:04:32.391273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.735 [2024-07-25 14:04:32.391286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.735 qpair failed and we were unable to recover it. 00:36:35.735 [2024-07-25 14:04:32.391605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.391617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.391787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.391800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.392041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.392054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.392278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.392290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.392528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.392540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.392728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.392741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.393039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.393051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.393283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.393295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 
00:36:35.736 [2024-07-25 14:04:32.393591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.393603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.393874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.393886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.394184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.394196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.394458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.394470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.394794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.394807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.395035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.395048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.395372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.395384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.395627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.395639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.395880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.395892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.396079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.396092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 
00:36:35.736 [2024-07-25 14:04:32.396320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.396332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.396572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.396611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.396995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.397036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.397417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.397458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.397851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.397892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.398273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.398313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.398649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.398681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.398971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.399018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.399386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.399427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.399605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.399645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 
00:36:35.736 [2024-07-25 14:04:32.399878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.399918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.400282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.400321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.400621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.400633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.400880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.400893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.401217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.401257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.401560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.401600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.401987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.402028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.402411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.402451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.402835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.402876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.736 qpair failed and we were unable to recover it. 00:36:35.736 [2024-07-25 14:04:32.403262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.736 [2024-07-25 14:04:32.403301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 
00:36:35.737 [2024-07-25 14:04:32.403611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.403651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.404073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.404114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.404421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.404461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.404788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.404828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.405135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.405175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.405489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.405528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.405821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.405878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.406191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.406232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.406463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.406475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.406773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.406796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 
00:36:35.737 [2024-07-25 14:04:32.407026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.407038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.407335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.407347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.407612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.407623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.407815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.407827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.408146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.408160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.408496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.408508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.408807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.408820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.409079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.409091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.409331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.409343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.409589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.409600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 
00:36:35.737 [2024-07-25 14:04:32.409919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.409932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.410214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.410226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.410412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.410424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.410654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.410666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.410910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.410923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.411176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.411188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.411433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.411445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.411766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.411779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.412035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.412048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.412223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.412235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 
00:36:35.737 [2024-07-25 14:04:32.412427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.412439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.412751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.412763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.413009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.413021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.413351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.413392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.413799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.413840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.414147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.414187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.414496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.737 [2024-07-25 14:04:32.414536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.737 qpair failed and we were unable to recover it. 00:36:35.737 [2024-07-25 14:04:32.414838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.414878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.415259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.415299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.415540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.415580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 
00:36:35.738 [2024-07-25 14:04:32.415943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.415984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.416256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.416296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.416535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.416574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.416867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.416879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.417215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.417255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.417571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.417610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.417933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.417975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.418336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.418376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.418681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.418727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.418949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.418989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 
00:36:35.738 [2024-07-25 14:04:32.419296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.419336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.419653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.419692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.419924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.419964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.420292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.420329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.420574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.420588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.420758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.420770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.421090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.421102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.421427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.421440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.421760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.421773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.422040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.422080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 
00:36:35.738 [2024-07-25 14:04:32.422388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.422427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.422787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.422828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.423158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.423198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.423503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.423542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.423835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.423848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.424090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.424103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.424343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.424366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.424661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.424673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.424916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.424929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.425230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.425242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 
00:36:35.738 [2024-07-25 14:04:32.425482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.425494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.425800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.425812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.426002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.426014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.738 [2024-07-25 14:04:32.426332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.738 [2024-07-25 14:04:32.426343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.738 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.426641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.426653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.426902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.426914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.427151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.427163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.427480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.427491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.427720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.427733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.428062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.428074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 
00:36:35.739 [2024-07-25 14:04:32.428319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.428331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.428604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.428615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.428790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.428802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.429100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.429112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.429360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.429372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.429602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.429614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.429812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.429824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.430134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.430174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.430533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.430568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 00:36:35.739 [2024-07-25 14:04:32.430916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.739 [2024-07-25 14:04:32.430928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.739 qpair failed and we were unable to recover it. 
00:36:35.739 [2024-07-25 14:04:32.431096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.739 [2024-07-25 14:04:32.431109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.739 qpair failed and we were unable to recover it.
00:36:35.745 [the same three-message failure repeats back-to-back, timestamps 2024-07-25 14:04:32.431 through 14:04:32.500, identical except for the timestamps: every connect() attempt to 10.0.0.2 port 4420 on tqpair=0x7f059c000b90 fails with errno = 111 and the qpair cannot be recovered]
00:36:35.745 [2024-07-25 14:04:32.501239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.501278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.501604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.501643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.502032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.502045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.502244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.502256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.502579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.502591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.502859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.502871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.503191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.503203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.503530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.503569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.503906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.503948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.504337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.504377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 
00:36:35.745 [2024-07-25 14:04:32.504704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.504750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.505136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.505176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.505479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.505518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.505886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.505927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.506173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.506213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.506517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.506557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.506922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.506934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.507191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.507230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.507528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.507567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.507928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.507940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 
00:36:35.745 [2024-07-25 14:04:32.508121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.508134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.508325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.508337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.508535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.508547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.508739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.745 [2024-07-25 14:04:32.508752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.745 qpair failed and we were unable to recover it. 00:36:35.745 [2024-07-25 14:04:32.508982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.508994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.509226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.509238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.509484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.509496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.509803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.509816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.510058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.510070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.510317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.510330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 
00:36:35.746 [2024-07-25 14:04:32.510648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.510660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.510885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.510897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.511127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.511140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.511388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.511427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.511737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.511783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.512182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.512223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.512449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.512488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.512813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.512826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.513071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.513084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.513329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.513341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 
00:36:35.746 [2024-07-25 14:04:32.513637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.513649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.513889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.513902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.514146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.514158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.514415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.514427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.514747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.514760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.514946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.514958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.515198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.515210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.515455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.515467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.515766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.515779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.516025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.516038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 
00:36:35.746 [2024-07-25 14:04:32.516307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.516319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.516624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.516636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.516862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.516875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.517072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.517085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.517264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.517276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.517469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.517509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.517890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.517931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.518253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.518293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.518603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.518643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.519052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.519092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 
00:36:35.746 [2024-07-25 14:04:32.519421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.519460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.746 qpair failed and we were unable to recover it. 00:36:35.746 [2024-07-25 14:04:32.519869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.746 [2024-07-25 14:04:32.519948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.520352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.520397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.520654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.520704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.521042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.521056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.521227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.521239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.521535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.521547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.521785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.521798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.522042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.522053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.522374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.522386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 
00:36:35.747 [2024-07-25 14:04:32.522641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.522653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.522948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.522961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.523257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.523270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.523565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.523577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.523770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.523784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.523972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.523983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.524306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.524319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.524506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.524519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.524701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.524716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.524883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.524896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 
00:36:35.747 [2024-07-25 14:04:32.525145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.525184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.525510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.525549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.525838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.525879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.526177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.526216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.526510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.526550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.526957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.526998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.527299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.527339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.527665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.527677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.527907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.527920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.528155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.528167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 
00:36:35.747 [2024-07-25 14:04:32.528516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.528555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.528856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.528896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.529143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.529183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.529569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.529608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.529999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.530039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.530366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.530406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.530618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.530630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.530876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.747 [2024-07-25 14:04:32.530889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.747 qpair failed and we were unable to recover it. 00:36:35.747 [2024-07-25 14:04:32.531073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.748 [2024-07-25 14:04:32.531085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.748 qpair failed and we were unable to recover it. 00:36:35.748 [2024-07-25 14:04:32.531388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.748 [2024-07-25 14:04:32.531400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.748 qpair failed and we were unable to recover it. 
00:36:35.748 [2024-07-25 14:04:32.531882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.748 [2024-07-25 14:04:32.531959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:35.748 qpair failed and we were unable to recover it.
[... the same failure for tqpair=0x7f05a4000b90 repeats through 2024-07-25 14:04:32.538, after which the errno = 111 failures for tqpair=0x7f059c000b90 resume; the last instance in this run: ...]
00:36:35.750 [2024-07-25 14:04:32.552546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.750 [2024-07-25 14:04:32.552558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:35.750 qpair failed and we were unable to recover it.
00:36:35.750 [2024-07-25 14:04:32.552810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.552823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.553088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.553100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.553334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.553346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.553586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.553598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.553850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.553862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.554088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.554101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.554284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.554296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.554478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.554491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.554679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.554727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.555113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.555153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 
00:36:35.750 [2024-07-25 14:04:32.555532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.555572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.555833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.555847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.556090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.556103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.556427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.556439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.556682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.556694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.557084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.557126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.557431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.557471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.557795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.557836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.558068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.558107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.558493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.558532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 
00:36:35.750 [2024-07-25 14:04:32.558842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.558854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.559085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.559097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.559343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.559355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.559727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.559768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.560145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.560185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.560572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.560613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.560920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.560932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.561250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.561262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.561590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.561629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.561795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.561836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 
00:36:35.750 [2024-07-25 14:04:32.562220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.562260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.562553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.562593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.562904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.562945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.563328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.563368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.563755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.563796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.564108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.564148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.750 qpair failed and we were unable to recover it. 00:36:35.750 [2024-07-25 14:04:32.564439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.750 [2024-07-25 14:04:32.564479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.564810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.564851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.565258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.565298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.565670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.565710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 
00:36:35.751 [2024-07-25 14:04:32.566033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.566072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.566376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.566416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.566654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.566693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.566987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.567000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.567273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.567286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.567583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.567595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.567915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.567928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.568229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.568241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.568412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.568424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.568651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.568663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 
00:36:35.751 [2024-07-25 14:04:32.568944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.568956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.569134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.569148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.569477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.569517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.569840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.569880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.570207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.570220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.570462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.570474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.570791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.570803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.570997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.571009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.571330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.571342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.571601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.571612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 
00:36:35.751 [2024-07-25 14:04:32.571877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.571890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.572141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.572153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.572381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.572393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.572498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.572509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.572764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.572776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.572956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.572968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.573221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.573233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.573554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.573566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.573809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.573821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.574164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.574201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 
00:36:35.751 [2024-07-25 14:04:32.574558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.574598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.574898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.574911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.575198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.751 [2024-07-25 14:04:32.575237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.751 qpair failed and we were unable to recover it. 00:36:35.751 [2024-07-25 14:04:32.575615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.575654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.575850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.575862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.576181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.576194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.576467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.576479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.576724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.576736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.576971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.576984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.577227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.577240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 
00:36:35.752 [2024-07-25 14:04:32.577489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.577501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.577756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.577768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.578038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.578050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.578317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.578329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.578649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.578661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.578911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.578923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.579167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.579179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.579489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.579529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.579911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.579952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 00:36:35.752 [2024-07-25 14:04:32.580258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.752 [2024-07-25 14:04:32.580270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:35.752 qpair failed and we were unable to recover it. 
00:36:35.752 [2024-07-25 14:04:32.582843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:35.752 [2024-07-25 14:04:32.582877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:35.752 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x5f7b30 through 2024-07-25 14:04:32.599594 ...]
00:36:36.028 [2024-07-25 14:04:32.599791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.028 [2024-07-25 14:04:32.599831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.028 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7f059c000b90 through 2024-07-25 14:04:32.605490 ...]
00:36:36.029 [2024-07-25 14:04:32.605736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.605748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.606068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.606081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.606380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.606394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.606669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.606709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.607079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.607119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.607478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.607517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.607743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.607784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.608173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.608213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.608570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.608609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.609005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.609045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 
00:36:36.029 [2024-07-25 14:04:32.609413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.609452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.609745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.609786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.610087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.610127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.610461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.610500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.610856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.610869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.611145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.611157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.611379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.611392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.611689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.611702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.612026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.612038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.612281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.612312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 
00:36:36.029 [2024-07-25 14:04:32.612570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.612609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.612972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.613012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.613331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.613371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.613663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.613703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.614030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.614072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.614458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.614498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.029 [2024-07-25 14:04:32.614904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.029 [2024-07-25 14:04:32.614945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.029 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.615327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.615367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.615738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.615779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.616036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.616078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 
00:36:36.030 [2024-07-25 14:04:32.616461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.616501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.616895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.616937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.617217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.617229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.617485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.617498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.617701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.617718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.617879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.617891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.618245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.618285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.618589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.618628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.618936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.618949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.619214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.619227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 
00:36:36.030 [2024-07-25 14:04:32.619471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.619483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.619780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.619793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.620040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.620054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.620271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.620283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.620546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.620558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.620796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.620809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.621078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.621090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.621325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.621337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.621429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.621441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.621749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.621789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 
00:36:36.030 [2024-07-25 14:04:32.622094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.622134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.622460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.622499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.622743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.622796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.623119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.623131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.623388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.623433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.623814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.623855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.624270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.624282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.624579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.624591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.624789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.624801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.624997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.625009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 
00:36:36.030 [2024-07-25 14:04:32.625330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.625343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.625522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.625535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.625783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.625796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.030 [2024-07-25 14:04:32.625993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.030 [2024-07-25 14:04:32.626005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.030 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.626240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.626252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.626443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.626455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.626620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.626631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.626929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.626940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.627124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.627135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.627429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.627464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 
00:36:36.031 [2024-07-25 14:04:32.627689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.627706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.627923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.627940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.628202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.628218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.628494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.628510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.628819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.628835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.629015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.629031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.629308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.629325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.629656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.629671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.629863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.629879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.630135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.630151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 
00:36:36.031 [2024-07-25 14:04:32.630322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.630338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.630522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.630535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.630835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.630848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.631044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.631056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.631255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.631267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.631458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.631469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.631748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.631760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.631977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.631989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.632228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.632239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.632541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.632552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 
00:36:36.031 [2024-07-25 14:04:32.632730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.632742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.632917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.632928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.633117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.633128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.633357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.633368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.633632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.633643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.633826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.633838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.634086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.634097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.634401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.634412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.634571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.634583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.634831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.634844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 
00:36:36.031 [2024-07-25 14:04:32.635020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.031 [2024-07-25 14:04:32.635032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.031 qpair failed and we were unable to recover it. 00:36:36.031 [2024-07-25 14:04:32.635197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.635209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.635391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.635404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.635698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.635710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.635983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.635997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.636163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.636176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.636338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.636350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.636525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.636537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.636796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.636809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.637107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.637121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 
00:36:36.032 [2024-07-25 14:04:32.637282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.637296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.637466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.637478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.637775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.637787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.637966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.637978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.638171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.638183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.638346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.638358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.638532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.638544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.638822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.638835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.639081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.639093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.639325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.639337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 
00:36:36.032 [2024-07-25 14:04:32.639585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.639597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.639897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.639910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.640147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.640159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.640334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.640347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.640648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.640660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.640841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.640854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.640959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.640970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.641197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.641210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.641506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.641518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.641771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.641784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 
00:36:36.032 [2024-07-25 14:04:32.641967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.641979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.642305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.642318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.642498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.642510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.642604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.642616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.642938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.642951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.643134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.643146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.643342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.643354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.643590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.032 [2024-07-25 14:04:32.643602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.032 qpair failed and we were unable to recover it. 00:36:36.032 [2024-07-25 14:04:32.643787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.033 [2024-07-25 14:04:32.643799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.033 qpair failed and we were unable to recover it. 00:36:36.033 [2024-07-25 14:04:32.644030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.033 [2024-07-25 14:04:32.644042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.033 qpair failed and we were unable to recover it. 
00:36:36.037 [2024-07-25 14:04:32.683545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.037 [2024-07-25 14:04:32.683580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420
00:36:36.037 qpair failed and we were unable to recover it.
00:36:36.037 [2024-07-25 14:04:32.683873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.037 [2024-07-25 14:04:32.683895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:36.037 qpair failed and we were unable to recover it.
[... four further attempts on tqpair=0x5f7b30 fail the same way through 14:04:32.684956, after which attempts on tqpair=0x7f059c000b90 resume and keep failing, ending with ...]
00:36:36.038 [2024-07-25 14:04:32.694803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.038 [2024-07-25 14:04:32.694816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.038 qpair failed and we were unable to recover it.
00:36:36.038 [2024-07-25 14:04:32.695007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.695020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.695340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.695352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.695601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.695613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.695909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.695922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.696165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.696178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.696475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.696487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.696785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.696798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.697097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.038 [2024-07-25 14:04:32.697109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.038 qpair failed and we were unable to recover it. 00:36:36.038 [2024-07-25 14:04:32.697343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.697355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.697650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.697662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 
00:36:36.039 [2024-07-25 14:04:32.697906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.697918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.698102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.698115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.698363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.698376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.698555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.698567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.698890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.698902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.699219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.699232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.699403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.699415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.699579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.699591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.699915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.699927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.700250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.700262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 
00:36:36.039 [2024-07-25 14:04:32.700584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.700596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.700926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.700939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.701124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.701375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.701387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.701697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.701709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.702012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.702024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.702350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.702362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.702658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.702670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.702904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.702916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.703163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.703175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 
00:36:36.039 [2024-07-25 14:04:32.703434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.703446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.703706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.703733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.704056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.704068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.704301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.704313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.704633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.704645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.704946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.704959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.705210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.705224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.705477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.705489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.705826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.705838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.706082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.706094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 
00:36:36.039 [2024-07-25 14:04:32.706390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.706402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.706719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.706732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.707028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.707041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.039 qpair failed and we were unable to recover it. 00:36:36.039 [2024-07-25 14:04:32.707231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.039 [2024-07-25 14:04:32.707243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.707538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.707551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.707794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.707806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.707990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.708002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.708256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.708268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.708510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.708522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.708750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.708762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 
00:36:36.040 [2024-07-25 14:04:32.709008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.709020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.709341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.709353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.709607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.709619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.709893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.709906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.710243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.710255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.710579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.710591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.710916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.710928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.711227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.711240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.711428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.711440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.711672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.711685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 
00:36:36.040 [2024-07-25 14:04:32.711999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.712012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.712190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.712202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.712468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.712480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.712802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.712814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.713075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.713088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.713385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.713397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.713637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.713648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.713969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.713981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.714223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.714235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 00:36:36.040 [2024-07-25 14:04:32.714478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.040 [2024-07-25 14:04:32.714490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.040 qpair failed and we were unable to recover it. 
00:36:36.041 [2024-07-25 14:04:32.714787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.714800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.715118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.715131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.715379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.715391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.715639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.715651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.715879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.715891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.716072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.716084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.716310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.716324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.716552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.716564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.716827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.716840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.717021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.717034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 
00:36:36.041 [2024-07-25 14:04:32.717304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.717317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.717576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.717588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.717953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.717966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.718217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.718229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.718542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.718554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.718809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.718821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.719065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.719077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.719260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.719273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.719471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.719483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.719646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.719658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 
00:36:36.041 [2024-07-25 14:04:32.719885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.719898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.720217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.720229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.720456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.720469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.720670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.720682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.720909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.720922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.041 [2024-07-25 14:04:32.721170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.041 [2024-07-25 14:04:32.721182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.041 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.721406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.721418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.721722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.721734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.721958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.721970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.722270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.722283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 
00:36:36.042 [2024-07-25 14:04:32.722533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.722545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.722812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.722825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.723133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.723146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.723448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.723460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.723758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.723770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.724022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.724034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.724353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.724365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.724687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.724700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.724931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.724943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.725259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.725272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 
00:36:36.042 [2024-07-25 14:04:32.725592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.725605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.725927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.725939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.726249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.726262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.726580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.726592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.726852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.726864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.727107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.727119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.727416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.727430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.727728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.727741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.728048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.728060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.728304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.728316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 
00:36:36.042 [2024-07-25 14:04:32.728649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.728661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.042 [2024-07-25 14:04:32.728902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.042 [2024-07-25 14:04:32.728914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.042 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.729159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.729171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.729413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.729426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.729651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.729663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.729913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.729925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.730157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.730169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.730465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.730478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.730799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.730812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 00:36:36.043 [2024-07-25 14:04:32.731042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.043 [2024-07-25 14:04:32.731055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.043 qpair failed and we were unable to recover it. 
00:36:36.043 [2024-07-25 14:04:32.731331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.043 [2024-07-25 14:04:32.731343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.043 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, ECONNREFUSED; sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-25 14:04:32.731639 through 14:04:32.788090 ...]
00:36:36.049 [2024-07-25 14:04:32.788224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.049 [2024-07-25 14:04:32.788236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.049 qpair failed and we were unable to recover it.
00:36:36.049 [2024-07-25 14:04:32.788532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.788544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.788844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.788856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.789106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.789118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.789456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.789479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.790002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.790023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.790315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.790331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.790609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.790626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.790897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.790914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.791224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.791241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.791495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.791509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 
00:36:36.049 [2024-07-25 14:04:32.791708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.791723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.792026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.792038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.792209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.792222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.792521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.792533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.792780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.792792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.793112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.793124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.793391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.793403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.793700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.793712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.794017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.794029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.794279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.794290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 
00:36:36.049 [2024-07-25 14:04:32.794617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.794629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.794940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.794952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.795130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.795142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.795324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.795336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.795582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.795594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.795892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.795904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.796203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.049 [2024-07-25 14:04:32.796215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.049 qpair failed and we were unable to recover it. 00:36:36.049 [2024-07-25 14:04:32.796461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.796473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.796723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.796735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.796984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.796996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 
00:36:36.050 [2024-07-25 14:04:32.797242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.797254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.797432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.797444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.797686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.797698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.798028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.798040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.798282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.798294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.798537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.798550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.798842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.798854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.799151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.799164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.799486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.799498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.799746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.799759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 
00:36:36.050 [2024-07-25 14:04:32.800028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.800040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.800392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.800404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.800649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.800662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.800986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.801000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.801318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.801330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.801571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.801583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.801917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.801930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.802222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.802234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.802556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.802569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.802816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.802828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 
00:36:36.050 [2024-07-25 14:04:32.803060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.803073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.803392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.803404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.803668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.803680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.803916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.803929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.804248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.804261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.804574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.804586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.804859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.804871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.805061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.805073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.805381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.805393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.805662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.805675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 
00:36:36.050 [2024-07-25 14:04:32.805918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.805931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.806175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.806188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.806493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.806505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.806605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.050 [2024-07-25 14:04:32.806617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.050 qpair failed and we were unable to recover it. 00:36:36.050 [2024-07-25 14:04:32.806863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.806876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.807138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.807150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.807446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.807458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.807704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.807725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.807967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.807981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.808214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.808225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 
00:36:36.051 [2024-07-25 14:04:32.808491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.808503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.808826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.808838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.808998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.809011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.809237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.809249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.809353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.809365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.809661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.809673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.809910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.809923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.810229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.810242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.810543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.810555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.810731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.810744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 
00:36:36.051 [2024-07-25 14:04:32.811065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.811077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.811397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.811409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.811680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.811692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.811853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.811867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.812049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.812061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.812267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.812279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.812578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.812838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.812879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.813168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.813208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.813621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.813661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 
00:36:36.051 [2024-07-25 14:04:32.814036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.814077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.814452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.814492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.814817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.814858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.815171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.815210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.815458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.815498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.815880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.815921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.816304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.816344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.816732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.816745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.816995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.817008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.817310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.817349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 
00:36:36.051 [2024-07-25 14:04:32.817658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.051 [2024-07-25 14:04:32.817699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.051 qpair failed and we were unable to recover it. 00:36:36.051 [2024-07-25 14:04:32.818015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.818056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.818418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.818459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.818737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.818750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.818928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.818940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.819238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.819250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.819559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.819571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.819874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.819886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.820163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.820175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.820448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.820487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 
00:36:36.052 [2024-07-25 14:04:32.820794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.820835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.821145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.821186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.821446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.821485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.821735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.821788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.822132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.822173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.822480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.822519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.822814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.822854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.823105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.823144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.823462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.823502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.823759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.823800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 
00:36:36.052 [2024-07-25 14:04:32.824131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.824171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.824505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.824557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.824789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.824802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.825069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.825083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.825334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.825346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.825524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.825536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.825868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.825909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.826239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.826280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.826572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.826612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 00:36:36.052 [2024-07-25 14:04:32.826857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.052 [2024-07-25 14:04:32.826898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.052 qpair failed and we were unable to recover it. 
00:36:36.052 [2024-07-25 14:04:32.827284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.052 [2024-07-25 14:04:32.827324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.052 qpair failed and we were unable to recover it.
[... the three-line error above repeats back-to-back, timestamps 14:04:32.827698 through 14:04:32.869504, all for tqpair=0x7f059c000b90 ...]
00:36:36.056 [2024-07-25 14:04:32.869794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.056 [2024-07-25 14:04:32.869832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:36.056 qpair failed and we were unable to recover it.
[... the same error repeats, timestamps 14:04:32.870949 through 14:04:32.878599, for tqpair=0x5f7b30 ...]
00:36:36.056 [2024-07-25 14:04:32.878796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.056 [2024-07-25 14:04:32.878811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.056 qpair failed and we were unable to recover it.
[... the same error repeats, timestamps 14:04:32.879007 through 14:04:32.893549, again for tqpair=0x7f059c000b90 ...]
00:36:36.058 [2024-07-25 14:04:32.893781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.893793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.894024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.894036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.894207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.894219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.894399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.894411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.894582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.894594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.894825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.894837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.895017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.895029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.895201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.895213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.895443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.895455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 00:36:36.058 [2024-07-25 14:04:32.895761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.058 [2024-07-25 14:04:32.895773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.058 qpair failed and we were unable to recover it. 
00:36:36.058 [2024-07-25 14:04:32.896011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.896023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.896344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.896356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.896594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.896607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.896794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.896807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.897075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.897087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.897264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.897276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.897534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.897546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.897783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.897796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.898030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.898042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.898288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.898300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 
00:36:36.059 [2024-07-25 14:04:32.898504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.898516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.898838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.898851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.899100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.899113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.899277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.899290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.899521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.899533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.899854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.899866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.900060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.900072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.900340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.900353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.900589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.900601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.059 [2024-07-25 14:04:32.900845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.900858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 
00:36:36.059 [2024-07-25 14:04:32.901103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.059 [2024-07-25 14:04:32.901115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.059 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.901345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.901358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.901523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.901537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.901781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.901793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.902093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.902105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.902360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.902372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.902603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.902615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.902807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.902821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.903088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.903100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.903345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.903357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 
00:36:36.333 [2024-07-25 14:04:32.903541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.903553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.903824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.903836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.904112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.904125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.904420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.904433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.904612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.904624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.904866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.904878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.905049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.905061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.905315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.905328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.905626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.905638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.905869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.905882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 
00:36:36.333 [2024-07-25 14:04:32.906113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.906125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.906310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.906322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.906567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.333 [2024-07-25 14:04:32.906580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.333 qpair failed and we were unable to recover it. 00:36:36.333 [2024-07-25 14:04:32.906756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.906768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.907021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.907034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.907357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.907369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.907597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.907610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.907931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.907944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.908255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.908267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.908518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.908531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 
00:36:36.334 [2024-07-25 14:04:32.908836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.908848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.909025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.909037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.909297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.909309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.909627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.909639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.909988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.910001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.910324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.910336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.910590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.910602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.910835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.910848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.910959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.910972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.911229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.911241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 
00:36:36.334 [2024-07-25 14:04:32.911488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.911500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.911743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.911756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.912022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.912034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.912371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.912384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.912563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.912575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.912821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.912834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.913026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.913038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.913202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.913215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.913372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.913384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.913548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.913560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 
00:36:36.334 [2024-07-25 14:04:32.913875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.913888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.914059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.914072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.914300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.914312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.914478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.914490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.914788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.914800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.915049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.915062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.915356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.915368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.915599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.915611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.915876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.915888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 00:36:36.334 [2024-07-25 14:04:32.916140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.334 [2024-07-25 14:04:32.916152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.334 qpair failed and we were unable to recover it. 
00:36:36.335 [2024-07-25 14:04:32.916404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.916416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.916725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.916738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.916923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.916935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.917166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.917179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.917473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.917486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.917763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.917776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.918007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.918019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.918285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.918297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.918546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.918558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.918748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.918761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 
00:36:36.335 [2024-07-25 14:04:32.918936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.918948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.919268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.919280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.919469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.919482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.919712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.919734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.919917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.919929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.920088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.920100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.920395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.920407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.920582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.920594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.920838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.920850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.921099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.921112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 
00:36:36.335 [2024-07-25 14:04:32.921342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.921354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.921580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.921592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.921840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.921853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.922096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.922108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.922282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.922294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.922614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.922626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.922883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.922895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.923127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.923141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.923368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.923380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.923615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.923627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 
00:36:36.335 [2024-07-25 14:04:32.923936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.923949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.924258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.924271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.924594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.924606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.924928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.924940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.925306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.925319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.925615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.925628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.925969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.335 [2024-07-25 14:04:32.925981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.335 qpair failed and we were unable to recover it. 00:36:36.335 [2024-07-25 14:04:32.926212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.336 [2024-07-25 14:04:32.926224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.336 qpair failed and we were unable to recover it. 00:36:36.336 [2024-07-25 14:04:32.926525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.336 [2024-07-25 14:04:32.926537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.336 qpair failed and we were unable to recover it. 00:36:36.336 [2024-07-25 14:04:32.926857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.336 [2024-07-25 14:04:32.926869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.336 qpair failed and we were unable to recover it. 
00:36:36.336 [2024-07-25 14:04:32.927099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.336 [2024-07-25 14:04:32.927111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.336 qpair failed and we were unable to recover it.
00:36:36.336 [... the same connect()/qpair-failure triplet repeats unchanged for roughly 200 further iterations, with only the microsecond timestamps advancing (14:04:32.927 through 14:04:32.982); every entry reports errno = 111 against tqpair=0x7f059c000b90, addr=10.0.0.2, port=4420 ...]
00:36:36.341 [2024-07-25 14:04:32.982934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.341 [2024-07-25 14:04:32.982947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.341 qpair failed and we were unable to recover it.
00:36:36.341 [2024-07-25 14:04:32.983189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.341 [2024-07-25 14:04:32.983201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.341 qpair failed and we were unable to recover it. 00:36:36.341 [2024-07-25 14:04:32.983374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.341 [2024-07-25 14:04:32.983386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.341 qpair failed and we were unable to recover it. 00:36:36.341 [2024-07-25 14:04:32.983636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.341 [2024-07-25 14:04:32.983648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.341 qpair failed and we were unable to recover it. 00:36:36.341 [2024-07-25 14:04:32.983809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.983822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.984050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.984062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.984295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.984308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.984626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.984638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.984869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.984882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.985071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.985084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.985380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.985392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 
00:36:36.342 [2024-07-25 14:04:32.985656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.985668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.985908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.985921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.986151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.986163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.986322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.986334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.986677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.986689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.986920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.986932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.987031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.987043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.987300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.987314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.987632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.987645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.987968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.987980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 
00:36:36.342 [2024-07-25 14:04:32.988228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.988241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.988482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.988494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.988817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.988830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.989085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.989098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.989268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.989281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.989603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.989615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.989950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.989962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.990308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.990320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.990575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.990587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.990912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.990925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 
00:36:36.342 [2024-07-25 14:04:32.991189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.991201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.991502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.991514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.991633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.991645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.991897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.991910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.992135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.992147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.342 qpair failed and we were unable to recover it. 00:36:36.342 [2024-07-25 14:04:32.992475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.342 [2024-07-25 14:04:32.992488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.992672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.992684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.992985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.992998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.993232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.993244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.993551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.993564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 
00:36:36.343 [2024-07-25 14:04:32.993808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.993821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.994005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.994018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.994287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.994299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.994482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.994494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.994745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.994758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.995026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.995038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.995356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.995369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.995639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.995651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.995842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.995855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.996177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.996189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 
00:36:36.343 [2024-07-25 14:04:32.996432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.996444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.996710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.996733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.997028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.997040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.997236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.997248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.997503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.997515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.997712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.997728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.998028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.998040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.998371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.998385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.998586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.998598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.998854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.998867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 
00:36:36.343 [2024-07-25 14:04:32.999109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.999122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.999299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.999311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.999613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.999625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:32.999942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:32.999955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.000272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.000285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.000582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.000594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.000854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.000866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.001181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.001193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.001444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.001457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.001634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.001646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 
00:36:36.343 [2024-07-25 14:04:33.001911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.001924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.002165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.343 [2024-07-25 14:04:33.002177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.343 qpair failed and we were unable to recover it. 00:36:36.343 [2024-07-25 14:04:33.002473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.002485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.002721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.002734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.003002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.003015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.003253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.003265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.003491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.003504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.003752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.003764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.003944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.003956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.004206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.004219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 
00:36:36.344 [2024-07-25 14:04:33.004458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.004498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.004783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.004796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.005087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.005126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.005432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.005473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.005841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.005883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.006177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.006217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.006578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.006618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.006937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.006950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.007177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.007189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.007440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.007453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 
00:36:36.344 [2024-07-25 14:04:33.007813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.007854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.008173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.008213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.008527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.008567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.008875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.008916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.009320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.009360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.009625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.009664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.010036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.010077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.010389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.010429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.010766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.010807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.011140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.011180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 
00:36:36.344 [2024-07-25 14:04:33.011483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.011523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.011905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.011947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.012158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.012170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.012360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.012400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.012625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.012637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.012869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.012881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.013081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.013094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.013201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.013241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.013557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.013597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 00:36:36.344 [2024-07-25 14:04:33.013934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.344 [2024-07-25 14:04:33.013975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.344 qpair failed and we were unable to recover it. 
00:36:36.345 [2024-07-25 14:04:33.014128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.014167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.014412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.014452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.014760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.014801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.015087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.015099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.015338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.015378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.015641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.015681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.016102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.016142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.016529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.016569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.016877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.016919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.017250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.017290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 
00:36:36.345 [2024-07-25 14:04:33.017595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.017635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.018013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.018054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.018394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.018434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.018762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.018803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.019108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.019153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.019544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.019584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.019916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.019957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.020321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.020361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.020616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.020655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 00:36:36.345 [2024-07-25 14:04:33.021093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.345 [2024-07-25 14:04:33.021135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.345 qpair failed and we were unable to recover it. 
00:36:36.345 [2024-07-25 14:04:33.021364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:36.345 [2024-07-25 14:04:33.021405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 
00:36:36.345 qpair failed and we were unable to recover it. 
00:36:36.345 [... the same connect() failure (errno = 111) and qpair connection error for tqpair=0x7f059c000b90, addr=10.0.0.2, port=4420 repeat continuously, identical except for timestamps, from 14:04:33.021 through 14:04:33.087 ...] 
00:36:36.351 [2024-07-25 14:04:33.087037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:36.351 [2024-07-25 14:04:33.087049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 
00:36:36.351 qpair failed and we were unable to recover it. 
00:36:36.351 [2024-07-25 14:04:33.087354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.087395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.087627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.087667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.088111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.088152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.088513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.088553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.088802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.088842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.089151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.089190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.089569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.089608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.089908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.089920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.090196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.090235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.090617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.090656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 
00:36:36.351 [2024-07-25 14:04:33.091035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.091076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.091343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.091384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.091699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.091751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.091987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.092027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.092263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.092304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.092595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.092635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.092994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.093007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.093249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.093288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.093584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.093624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.093949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.093962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 
00:36:36.351 [2024-07-25 14:04:33.094261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.094273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.094519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.094559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.094866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.094907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.095230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.095269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.095561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.095606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.351 qpair failed and we were unable to recover it. 00:36:36.351 [2024-07-25 14:04:33.095914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.351 [2024-07-25 14:04:33.095955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.096341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.096382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.096706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.096753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.097052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.097092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.097461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.097500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 
00:36:36.352 [2024-07-25 14:04:33.097811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.097852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.098237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.098277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.098602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.098643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.099011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.099052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.099434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.099473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.099707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.099755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.100065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.100106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.100400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.100439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.100755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.100797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.101107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.101148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 
00:36:36.352 [2024-07-25 14:04:33.101444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.101484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.101859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.101900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.102270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.102310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.102664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.102704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.103028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.103068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.103451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.103492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.103819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.103860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.104272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.104312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.104604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.104644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.104947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.104959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 
00:36:36.352 [2024-07-25 14:04:33.105148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.105188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.105257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605b30 (9): Bad file descriptor 00:36:36.352 [2024-07-25 14:04:33.105734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.105816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.106210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.106255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.106524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.106565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.106824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.106867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.107073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.107090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.352 [2024-07-25 14:04:33.107288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.352 [2024-07-25 14:04:33.107305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.352 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.107483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.107500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.107692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.107742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 
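The one record that breaks the pattern sits above: nvme_tcp_qpair_process_completions fails to flush tqpair=0x605b30 with errno 9, which is EBADF. By the time the flush runs, the socket descriptor behind that qpair has already been torn down by the failed connects, and from this point on the retries report a different tqpair pointer (0x5f7b30 instead of 0x7f059c000b90), i.e. a fresh qpair object carries on the same doomed connect loop. A tiny standalone sketch of how any I/O on an already-closed descriptor yields that errno; this is not SPDK's flush path, just the libc-level behavior behind the message.

/* Illustration of errno 9 (EBADF), as in "Failed to flush ... (9): Bad
 * file descriptor" above: I/O on a closed descriptor fails with EBADF. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    close(fd);                        /* descriptor is now invalid */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        /* Prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}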
00:36:36.353 [2024-07-25 14:04:33.108138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.108179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.108481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.108522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.108834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.108875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.109212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.109251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.109417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.109458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.109736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.109778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.110108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.110148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.110385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.110402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.110655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.110692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.111004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.111046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 
00:36:36.353 [2024-07-25 14:04:33.111377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.111417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.111726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.111768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.112137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.112178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.112422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.112461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.112776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.112817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.113182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.113223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.113517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.113557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.113787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.113828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.114218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.114258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.114674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.114727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 
00:36:36.353 [2024-07-25 14:04:33.115038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.115078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.115391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.115431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.115748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.115791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.115960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.115976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.116257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.116297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.116605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.116645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.116966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.117007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.117259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.117300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.117602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.117642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.117964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.118006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 
00:36:36.353 [2024-07-25 14:04:33.118247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.118264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.118506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.118523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.118781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.118802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.119061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.119101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.119489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.119529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.353 [2024-07-25 14:04:33.119839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.353 [2024-07-25 14:04:33.119879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.353 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.120116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.120134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.120475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.120515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.120878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.120920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.121249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.121266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 
00:36:36.354 [2024-07-25 14:04:33.121511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.121528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.121832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.121873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.122122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.122162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.122524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.122565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.122937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.122980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.123314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.123353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.123753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.123794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.124037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.124078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.124402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.124443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.124694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.124760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 
00:36:36.354 [2024-07-25 14:04:33.125815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.125844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.126102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.126120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.126321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.126362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.126780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.126822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.127130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.127171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.127527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.127568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.127878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.127920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.128309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.128349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.128602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.128642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.128962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.129004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 
00:36:36.354 [2024-07-25 14:04:33.129393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.129434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.129663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.129703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.130016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.130057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.130386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.130403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.130579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.130596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.130854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.130872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.131063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.131080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.131285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.131302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.131488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.131505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 00:36:36.354 [2024-07-25 14:04:33.131814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.354 [2024-07-25 14:04:33.131832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.354 qpair failed and we were unable to recover it. 
00:36:36.354 [2024-07-25 14:04:33.132139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.354 [2024-07-25 14:04:33.132156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:36.354 qpair failed and we were unable to recover it.
00:36:36.357 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats 111 more times between 14:04:33.132 and 14:04:33.160 ...]
00:36:36.358 [2024-07-25 14:04:33.161050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.358 [2024-07-25 14:04:33.161086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420
00:36:36.358 qpair failed and we were unable to recover it.
00:36:36.358 [2024-07-25 14:04:33.161402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.358 [2024-07-25 14:04:33.161438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:36.358 qpair failed and we were unable to recover it.
00:36:36.358 [2024-07-25 14:04:33.161709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.358 [2024-07-25 14:04:33.161743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.358 qpair failed and we were unable to recover it.
00:36:36.358 [... the same three-line sequence repeats 94 more times for tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 between 14:04:33.161 and 14:04:33.190 ...]
00:36:36.360 [2024-07-25 14:04:33.190453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.360 [2024-07-25 14:04:33.190465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.360 qpair failed and we were unable to recover it.
00:36:36.360 [2024-07-25 14:04:33.190735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.190762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.191004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.191017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.191206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.191218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.191469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.191481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.191670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.191682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.191865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.191877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.192141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.192154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.192334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.192375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.192617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.192657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.192916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.192957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 
00:36:36.360 [2024-07-25 14:04:33.193342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.193382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.193601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.193614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.193950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.193991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.360 [2024-07-25 14:04:33.194303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.360 [2024-07-25 14:04:33.194344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.360 qpair failed and we were unable to recover it. 00:36:36.361 [2024-07-25 14:04:33.194593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.361 [2024-07-25 14:04:33.194633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.361 qpair failed and we were unable to recover it. 00:36:36.361 [2024-07-25 14:04:33.194930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.361 [2024-07-25 14:04:33.194971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.361 qpair failed and we were unable to recover it. 00:36:36.361 [2024-07-25 14:04:33.195280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.361 [2024-07-25 14:04:33.195320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.361 qpair failed and we were unable to recover it. 00:36:36.361 [2024-07-25 14:04:33.195704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.361 [2024-07-25 14:04:33.195752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.361 qpair failed and we were unable to recover it. 00:36:36.361 [2024-07-25 14:04:33.196074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.361 [2024-07-25 14:04:33.196114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.361 qpair failed and we were unable to recover it. 00:36:36.361 [2024-07-25 14:04:33.196424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.361 [2024-07-25 14:04:33.196437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.361 qpair failed and we were unable to recover it. 
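For context on the repeated error above: errno 111 on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 is reachable but nothing is listening on port 4420 (the target application is down at this point in the test). A minimal sketch, not SPDK's posix_sock_create, of a plain BSD-socket connect that produces the same errno; the address and port are taken from the log:

/*
 * Minimal sketch (not SPDK code): a plain connect() to 10.0.0.2:4420.
 * With no listener on that port it fails with errno = 111 (ECONNREFUSED).
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Prints "connect() failed, errno = 111" while the target is down. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}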
00:36:36.361 [2024-07-25 14:04:33.196697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.361 [2024-07-25 14:04:33.196759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.361 qpair failed and we were unable to recover it.
00:36:36.361 [2024-07-25 14:04:33.197416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.361 [2024-07-25 14:04:33.197493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:36.361 qpair failed and we were unable to recover it.
00:36:36.638 [... the same triplet repeats for tqpair=0x5f7b30 from 14:04:33.197839 through 14:04:33.219140 ...]
00:36:36.639 [2024-07-25 14:04:33.219496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.639 [2024-07-25 14:04:33.219531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:36.639 qpair failed and we were unable to recover it.
00:36:36.639 [2024-07-25 14:04:33.219905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.639 [2024-07-25 14:04:33.219933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.639 qpair failed and we were unable to recover it.
00:36:36.640 [... the same triplet repeats for tqpair=0x7f059c000b90 from 14:04:33.220259 through 14:04:33.232258 ...]
00:36:36.640 [... the same triplet repeats for tqpair=0x7f059c000b90 from 14:04:33.232653 through 14:04:33.234169 ...]
00:36:36.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 515850 Killed "${NVMF_APP[@]}" "$@"
00:36:36.641 [... triplets for tqpair=0x7f059c000b90 continue from 14:04:33.234352 through 14:04:33.235124 ...]
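The long run of retries above, each ending in "qpair failed and we were unable to recover it.", is the initiator re-attempting the connection while the target is down; the Killed line shows the harness has terminated the old target process, so every connect is refused until it is restarted below. A rough sketch of that retry pattern, with an arbitrary retry budget and 100 ms back-off that are illustrative assumptions rather than SPDK's actual reconnect policy:

/*
 * Rough sketch of the retry behaviour the log implies; retry budget
 * and back-off are assumptions, not SPDK's policy.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* One connect attempt; returns 0 on success, -errno on failure. */
static int try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) ? -errno : 0;
    close(fd);
    return rc;
}

int main(void)
{
    for (int i = 0; i < 20; i++) {                 /* arbitrary retry budget */
        int rc = try_connect("10.0.0.2", 4420);    /* address/port from the log */
        if (rc == 0) {
            puts("target is back, connection established");
            return 0;
        }
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                i + 1, -rc, strerror(-rc));
        usleep(100 * 1000);                        /* 100 ms between attempts */
    }
    fprintf(stderr, "qpair failed and we were unable to recover it.\n");
    return 1;
}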
00:36:36.641 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:36.641 [2024-07-25 14:04:33.235360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.235373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:36.641 [2024-07-25 14:04:33.235614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.235627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:36.641 [2024-07-25 14:04:33.235872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.235885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:36.641 [2024-07-25 14:04:33.236190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.236203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:36.641 [2024-07-25 14:04:33.236527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.236539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 [2024-07-25 14:04:33.236766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.236779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 [2024-07-25 14:04:33.236956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.236968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 00:36:36.641 [2024-07-25 14:04:33.237082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.641 [2024-07-25 14:04:33.237095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.641 qpair failed and we were unable to recover it. 
00:36:36.641 [... the connect() failed (errno = 111) / qpair-failed triplet repeats continuously for timestamps 14:04:33.237337 through 14:04:33.244412 ...]
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=516671
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 516671
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 516671 ']'
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:36.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:36.642 [... interleaved with the trace lines above, the connect() failed (errno = 111) / qpair-failed triplet repeats for timestamps 14:04:33.244710 through 14:04:33.246126 ...]
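Editor's note: the trace above restarts the target: nvmf_tgt is launched in the cvl_0_0_ns_spdk network namespace with core mask 0xF0 (cores 4-7) and its PID (516671) is recorded for waitforlisten. Below is a hedged C stand-in for that launch-and-remember-the-PID pattern; start_nvmf_tgt is a hypothetical helper name, the binary path and flags are copied from the trace, and the ip netns wrapper is omitted for brevity.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

/* Stand-in for the shell's `nvmf_tgt ... & nvmfpid=$!`:
 * fork, exec the target binary, and return the child PID so
 * the caller can poll for its RPC socket (see next sketch). */
static pid_t start_nvmf_tgt(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl("/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt",
              "nvmf_tgt", "-i", "0", "-e", "0xFFFF", "-m", "0xF0", (char *)NULL);
        _exit(127);              /* only reached if exec failed */
    }
    return pid;                  /* parent: the shell's "$!" */
}

int main(void)
{
    pid_t nvmfpid = start_nvmf_tgt();
    printf("nvmfpid=%d\n", (int)nvmfpid);
    return 0;
}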
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:36.642 14:04:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:36.642 [... interleaved with the trace lines above, the connect() failed (errno = 111) / qpair-failed triplet repeats for timestamps 14:04:33.246432 through 14:04:33.248315 ...]
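Editor's note: waitforlisten (with max_retries=100 and rpc_addr=/var/tmp/spdk.sock, per the trace) waits until the restarted target is accepting connections on its RPC socket. A minimal sketch of that polling idea, assuming a plain UNIX-domain connect probe; wait_for_listen is a hypothetical name and the real helper does more (e.g. checking that the PID is still alive):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Probe a UNIX-domain socket until the freshly started target
 * accepts a connection, or give up after max_retries attempts. */
static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);      /* retry after 100 ms */
    }
    return -1;                   /* timed out */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("target is listening");
    else
        puts("gave up waiting");
    return 0;
}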
00:36:36.642 [... while the new target starts, the connect() failed (errno = 111) / qpair-failed triplet repeats continuously for timestamps 14:04:33.248484 through 14:04:33.283438 ...]
00:36:36.648 [2024-07-25 14:04:33.283530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.283542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.283703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.283719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.283831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.283843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.284144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.284156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.284473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.284486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.284794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.284808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.285111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.285123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.285420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.285432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.285681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.285693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.285937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.285949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 
00:36:36.648 [2024-07-25 14:04:33.286187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.286199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.286446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.286458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.286615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.286627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.286802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.286815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.287045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.287057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.287287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.287300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.287540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.287552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.287788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.287801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.288106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.288118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 00:36:36.648 [2024-07-25 14:04:33.288441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.648 [2024-07-25 14:04:33.288454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.648 qpair failed and we were unable to recover it. 
00:36:36.648 [2024-07-25 14:04:33.288701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.288713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.288988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.289001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.289332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.289345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.289574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.289586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.289839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.289852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.290151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.290163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.290463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.290475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.290720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.290733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.291052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.291065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.291252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.291265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 
00:36:36.649 [2024-07-25 14:04:33.291513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.291526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.291721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.291734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.292019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.292032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.292331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.292344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.292593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.292607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.292782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.292794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.293031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.293044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.293366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.293378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.293627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.293640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 00:36:36.649 [2024-07-25 14:04:33.293881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.649 [2024-07-25 14:04:33.293895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.649 qpair failed and we were unable to recover it. 
00:36:36.649 [2024-07-25 14:04:33.294790] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:36:36.649 [2024-07-25 14:04:33.294837] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:36.649 [2024-07-25 14:04:33.295192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.650 [2024-07-25 14:04:33.295227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420
00:36:36.650 qpair failed and we were unable to recover it.
00:36:36.650 [2024-07-25 14:04:33.295509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.650 [2024-07-25 14:04:33.295542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:36.650 qpair failed and we were unable to recover it.
00:36:36.650 [2024-07-25 14:04:33.295811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.650 [2024-07-25 14:04:33.295846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420
00:36:36.650 qpair failed and we were unable to recover it.
00:36:36.654 [2024-07-25 14:04:33.329779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.329791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.330113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.330126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.330379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.330391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.330635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.330647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.330945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.330957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.331294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.331306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.331573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.331586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.331760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.331773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.332073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.332086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.332411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.332423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 
00:36:36.654 EAL: No free 2048 kB hugepages reported on node 1 00:36:36.654 [2024-07-25 14:04:33.332695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.332708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.333011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.333023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.333323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.333335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.333582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.333594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.333893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.333905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.334142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.334155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.334473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.334485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.334717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.654 [2024-07-25 14:04:33.334730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.654 qpair failed and we were unable to recover it. 00:36:36.654 [2024-07-25 14:04:33.334997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.335010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.335190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.335202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 
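The interleaved "EAL: No free 2048 kB hugepages reported on node 1" message is a DPDK EAL notice, not one of the connection errors: it says NUMA node 1 had no free 2 MB hugepages when the environment was initialized, which is usually harmless as long as another node or hugepage size satisfies the allocation. A quick standalone check of the counter EAL consults might look like the sketch below; it is illustrative and assumes the standard Linux sysfs layout for per-node hugepage accounting:

#include <stdio.h>

int main(void)
{
    /* Per-node 2 MB hugepage counter on Linux; node1 matches the notice above. */
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        perror(path);
        return 1;
    }
    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("node1 free 2048 kB hugepages: %ld\n", free_pages);
    return 0;
}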
00:36:36.655 [2024-07-25 14:04:33.335428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.335441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.335703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.335719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.335970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.335983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.336166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.336179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.336372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.336384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.336719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.336732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.337058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.337071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.337251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.337263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.337288] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:36.655 [2024-07-25 14:04:33.337517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.337530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 
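The dpdk_pci_init NOTICE above records that this run linked an in-development DPDK (24.07.0-rc3), which SPDK does not officially support and enables only for validation builds such as this one. To confirm which DPDK a binary was built against, the version string is exposed through rte_version(); the sketch below assumes DPDK headers and libraries are installed:

#include <stdio.h>
#include <rte_version.h>   /* declares rte_version() */

int main(void)
{
    /* Prints the linked DPDK version string, e.g. "DPDK 24.07.0-rc3". */
    printf("%s\n", rte_version());
    return 0;
}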
00:36:36.655 [2024-07-25 14:04:33.337779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.337792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.338083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.338095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.338412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.338424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.338612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.338625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.338811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.338824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.339145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.339157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.339392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.339404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.339656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.339668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.339859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.339871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.340115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.340127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 
00:36:36.655 [2024-07-25 14:04:33.340447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.340459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.340701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.340717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.340948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.340960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.341261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.341273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.341517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.341529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.341862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.341874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.655 qpair failed and we were unable to recover it. 00:36:36.655 [2024-07-25 14:04:33.342121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.655 [2024-07-25 14:04:33.342133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.342429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.342441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.342751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.342764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.342952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.342966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 
00:36:36.656 [2024-07-25 14:04:33.343266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.343279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.343479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.343492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.343811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.343824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.344081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.344093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.344271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.344283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.344533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.344545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.344868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.344880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.345215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.345227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.345500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.345513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.345680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.345692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 
00:36:36.656 [2024-07-25 14:04:33.345940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.345953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.346295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.346307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.346629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.346642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.346936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.346949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.347212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.347225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.347397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.347409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.347706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.347722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.348049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.348061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.348327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.348340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.348567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.348580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 
00:36:36.656 [2024-07-25 14:04:33.348900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.348912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.349084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.349096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.349323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.349335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.349588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.349600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.349829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.349841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.350164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.350177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.350440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.656 [2024-07-25 14:04:33.350453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.656 qpair failed and we were unable to recover it. 00:36:36.656 [2024-07-25 14:04:33.350654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.350667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.350935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.350947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.351247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.351259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 
00:36:36.657 [2024-07-25 14:04:33.351579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.351592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.351891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.351903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.352133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.352146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.352385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.352397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.352685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.352698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.352899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.352912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.353106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.353118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.353418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.353431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.353675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.353687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.354012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.354027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 
00:36:36.657 [2024-07-25 14:04:33.354342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.354354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.354661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.354673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.354942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.354954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.355253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.355265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.355525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.355537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.355779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.355791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.356019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.356031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.356326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.356338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.356655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.356667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.356893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.356906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 
00:36:36.657 [2024-07-25 14:04:33.357147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.357159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.357409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.357421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.357607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.357620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.357919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.357932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.358200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.358212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.358457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.657 [2024-07-25 14:04:33.358469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.657 qpair failed and we were unable to recover it. 00:36:36.657 [2024-07-25 14:04:33.358788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.358800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.359043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.359056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.359319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.359332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.359670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.359682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 
00:36:36.658 [2024-07-25 14:04:33.360009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.360022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.360345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.360357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.360656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.360668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.360912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.360925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.361201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.361214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.361460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.361472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.361708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.361726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.361884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.361896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.362148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.362161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.362414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.362426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 
00:36:36.658 [2024-07-25 14:04:33.362722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.362734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.362982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.362994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.363291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.363304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.363599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.363611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.363855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.363868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.364214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.364226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.364454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.364466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.364764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.364777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.365090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.365102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 00:36:36.658 [2024-07-25 14:04:33.365349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.365363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it. 
00:36:36.658 [2024-07-25 14:04:33.365609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.658 [2024-07-25 14:04:33.365621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.658 qpair failed and we were unable to recover it.
... [the three-line connect()/qpair-failure pattern above repeats continuously from 14:04:33.365609 through 14:04:33.386572, always with errno = 111 and tqpair=0x7f059c000b90]
... [the same connect()/qpair-failure triplet continues from 14:04:33.386808, interrupted only by the notice below]
00:36:36.661 [2024-07-25 14:04:33.387288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
... [connect()/qpair failures resume immediately and run through 14:04:33.389216]
... [the connect()/qpair-failure triplet keeps repeating from 14:04:33.389462 through 14:04:33.422364, every attempt failing with errno = 111; at 14:04:33.408538 the failing tqpair briefly changes to 0x7f05a4000b90, then to 0x7f0594000b90 for eight attempts, before returning to 0x7f059c000b90 at 14:04:33.411583]
00:36:36.665 [2024-07-25 14:04:33.422684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.422696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.423023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.423037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.423305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.423319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.423660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.423673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.423867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.423880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.424205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.424218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.424415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.424427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.424614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.424627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.424890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.424903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 00:36:36.665 [2024-07-25 14:04:33.425223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.665 [2024-07-25 14:04:33.425236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.665 qpair failed and we were unable to recover it. 
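[Editor's note: errno = 111 is ECONNREFUSED on Linux, so each connect() to 10.0.0.2:4420 is being actively refused — no NVMe/TCP listener is up on that address yet, and the initiator keeps retrying and logging the same failure. A minimal sketch of the same symptom, assuming a local port with no listener (127.0.0.1:4420 here is purely illustrative; bash's /dev/tcp pseudo-device stands in for posix_sock_create):

    # Probe a TCP endpoint with no listener; connect() fails with
    # ECONNREFUSED (errno 111), matching the posix_sock_create errors above.
    if ! bash -c ': </dev/tcp/127.0.0.1/4420' 2>/dev/null; then
      echo "connect() refused (errno 111 / ECONNREFUSED)"
    fi
]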
00:36:36.665 [2024-07-25 14:04:33.425541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.665 [2024-07-25 14:04:33.425557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.665 qpair failed and we were unable to recover it.
[... the same sequence repeats 4 more times (14:04:33.425809 through 14:04:33.426653) ...]
00:36:36.665 [2024-07-25 14:04:33.426897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.665 [2024-07-25 14:04:33.426902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:36.665 [2024-07-25 14:04:33.426910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.665 qpair failed and we were unable to recover it.
00:36:36.665 [2024-07-25 14:04:33.426926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:36.665 [2024-07-25 14:04:33.426939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:36.665 [2024-07-25 14:04:33.426948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:36.665 [2024-07-25 14:04:33.426955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:36.665 [2024-07-25 14:04:33.427072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:36:36.665 [2024-07-25 14:04:33.427180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.665 [2024-07-25 14:04:33.427193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.665 qpair failed and we were unable to recover it.
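[Editor's note: the app_setup_trace notices above spell out how a trace snapshot could be pulled from the running nvmf target. A sketch using exactly the commands the log names — instance id 0 and the /dev/shm/nvmf_trace.0 path come from the notices; the /tmp destination is an assumption:

    spdk_trace -s nvmf -i 0          # capture a snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/   # or copy the trace file for offline analysis/debug
]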
00:36:36.665 [2024-07-25 14:04:33.427182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:36:36.665 [2024-07-25 14:04:33.427290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:36:36.665 [2024-07-25 14:04:33.427291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:36:36.665 [2024-07-25 14:04:33.427435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.665 [2024-07-25 14:04:33.427447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.665 qpair failed and we were unable to recover it.
[... the same sequence repeats 8 more times (14:04:33.427639 through 14:04:33.429069) ...]
00:36:36.665 [2024-07-25 14:04:33.429305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.665 [2024-07-25 14:04:33.429317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.665 qpair failed and we were unable to recover it.
[... the same sequence repeats 78 more times for tqpair=0x7f059c000b90 (14:04:33.429571 through 14:04:33.449669) ...]
00:36:36.668 [2024-07-25 14:04:33.449903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.668 [2024-07-25 14:04:33.449917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.668 qpair failed and we were unable to recover it.
00:36:36.668 [2024-07-25 14:04:33.450100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.668 [2024-07-25 14:04:33.450114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.668 qpair failed and we were unable to recover it.
00:36:36.668 [2024-07-25 14:04:33.450465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.668 [2024-07-25 14:04:33.450499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05a4000b90 with addr=10.0.0.2, port=4420
00:36:36.668 qpair failed and we were unable to recover it.
[... the same sequence repeats 17 more times for the new tqpair 0x7f05a4000b90 (14:04:33.450771 through 14:04:33.455646) ...]
00:36:36.668 [2024-07-25 14:04:33.455898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.668 [2024-07-25 14:04:33.455915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.668 qpair failed and we were unable to recover it.
00:36:36.668 [2024-07-25 14:04:33.456152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.669 [2024-07-25 14:04:33.456169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.669 qpair failed and we were unable to recover it.
[... the same sequence repeats 18 more times (14:04:33.456417 through 14:04:33.461101) ...]
00:36:36.669 [2024-07-25 14:04:33.461450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.669 [2024-07-25 14:04:33.461464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.669 qpair failed and we were unable to recover it.
00:36:36.669 [2024-07-25 14:04:33.461728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.461741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.462027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.462065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.462383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.462401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.462590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.462607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.462904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.462924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.463150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.463167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.463477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.463493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f7b30 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.463829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.463843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.464078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.464090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.464387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.464399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 
00:36:36.669 [2024-07-25 14:04:33.464629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.669 [2024-07-25 14:04:33.464641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.669 qpair failed and we were unable to recover it. 00:36:36.669 [2024-07-25 14:04:33.464949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.464961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.465237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.465249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.465445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.465457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.465777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.465790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.465985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.465997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.466186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.466198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.466522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.466534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.466839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.466852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.467095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.467107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 
00:36:36.670 [2024-07-25 14:04:33.467370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.467383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.467690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.467702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.467969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.467982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.468254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.468266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.468585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.468597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.468924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.468937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.469266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.469279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.469603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.469616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.469926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.469939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.470283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.470295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 
00:36:36.670 [2024-07-25 14:04:33.470634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.470647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.470877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.470891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.471207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.471222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.471549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.471564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.471891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.471908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.472213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.472227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.472565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.472583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.670 [2024-07-25 14:04:33.472906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.670 [2024-07-25 14:04:33.472925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.670 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.473273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.473287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.473598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.473615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 
00:36:36.671 [2024-07-25 14:04:33.473939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.473954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.474198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.474214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.474465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.474477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.474792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.474805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.474987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.475000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.475314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.475327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.475568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.475582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.475879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.475892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.476190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.476203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.476452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.476465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 
00:36:36.671 [2024-07-25 14:04:33.476784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.476797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.477103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.477117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.477435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.477448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.477767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.477780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.478046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.478059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.478394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.478407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.478766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.478779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.479034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.479047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.479367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.479380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.479698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.479711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 
00:36:36.671 [2024-07-25 14:04:33.480044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.480057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.480298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.671 [2024-07-25 14:04:33.480310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.671 qpair failed and we were unable to recover it. 00:36:36.671 [2024-07-25 14:04:33.480619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.480632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.480893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.480907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.481114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.481127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.481445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.481458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.481785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.481798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.482107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.482121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.482393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.482406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.482733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.482745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 
00:36:36.672 [2024-07-25 14:04:33.483070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.483082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.483334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.483346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.483647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.483659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.483957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.483969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.484290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.484302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.484598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.484611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.484852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.484865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.485091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.485103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.485359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.485372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 00:36:36.672 [2024-07-25 14:04:33.485691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.672 [2024-07-25 14:04:33.485703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.672 qpair failed and we were unable to recover it. 
00:36:36.672 [2024-07-25 14:04:33.485966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.485979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.486290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.486305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.486565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.486577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.486898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.486911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.487173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.487185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.487517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.487529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.487853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.487865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.488195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.488207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.488535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.488547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.488871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.488883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 
00:36:36.673 [2024-07-25 14:04:33.489213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.489226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.489455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.489467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.489647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.489659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.489888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.489901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.490153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.490165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.490484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.490496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.490728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.490741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.490972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.490985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.491243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.491255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.491503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.491515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 
00:36:36.673 [2024-07-25 14:04:33.491885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.491898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.492222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.492234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.492553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.492565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.492894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.492906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.493159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.493171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.493473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.673 [2024-07-25 14:04:33.493485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.673 qpair failed and we were unable to recover it. 00:36:36.673 [2024-07-25 14:04:33.493684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.493696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.494023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.494036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.494287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.494299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.494597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.494609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 
00:36:36.674 [2024-07-25 14:04:33.494951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.494963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.495211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.495223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.495495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.495507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.495803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.495816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.496137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.496150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.496446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.496458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.496780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.496792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.497109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.497122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.497458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.497470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.497793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.497805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 
00:36:36.674 [2024-07-25 14:04:33.498064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.498077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.498413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.498427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.498690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.498702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.499035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.499047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.499280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.499292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.499554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.499566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.499867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.499880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.500201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.500213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.500535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.500547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.500882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.500895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 
00:36:36.674 [2024-07-25 14:04:33.501194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.501206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.501475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.501488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.501812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.501825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.502133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.502145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.502400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.502412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.502730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.674 [2024-07-25 14:04:33.502743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.674 qpair failed and we were unable to recover it. 00:36:36.674 [2024-07-25 14:04:33.502989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.503001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.503308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.503320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.503658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.503670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.503988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.504001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 
00:36:36.675 [2024-07-25 14:04:33.504297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.504309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.504504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.504517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.504699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.504711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.504943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.504955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.505186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.505198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.505542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.505554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.505871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.505883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.506216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.506228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.506554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.506566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 00:36:36.675 [2024-07-25 14:04:33.506896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.675 [2024-07-25 14:04:33.506908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.675 qpair failed and we were unable to recover it. 
00:36:36.675 [2024-07-25 14:04:33.507098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.675 [2024-07-25 14:04:33.507111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.675 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 14:04:33.507098 through 14:04:33.569920 (console timestamps 00:36:36.675 to 00:36:36.955): every connect() attempt returns errno = 111, nvme_tcp reports a sock connection error for tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovery ...]
00:36:36.955 [2024-07-25 14:04:33.570245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.570257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.955 [2024-07-25 14:04:33.570488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.570502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.955 [2024-07-25 14:04:33.570818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.570830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.955 [2024-07-25 14:04:33.571159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.571172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.955 [2024-07-25 14:04:33.571398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.571411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.955 [2024-07-25 14:04:33.571673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.571685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.955 [2024-07-25 14:04:33.571881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.955 [2024-07-25 14:04:33.571893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.955 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.572201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.572213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.572528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.572540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.572770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.572782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 
00:36:36.956 [2024-07-25 14:04:33.573094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.573106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.573408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.573420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.573754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.573766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.574084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.574096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.574341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.574353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.574657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.574670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.574897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.574910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.575177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.575189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.575416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.575428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.575670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.575683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 
00:36:36.956 [2024-07-25 14:04:33.575997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.576010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.576238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.576559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.576572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.576898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.576910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.577225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.577237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.577499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.577511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.577847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.577860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.578218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.578230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.578531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.578543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.578708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.578730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 
00:36:36.956 [2024-07-25 14:04:33.579071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.579083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.579359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.579371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.579624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.579637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.579960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.579973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.580297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.580309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.580629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.580641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.580959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.580972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.581224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.581236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.581534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.581547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.581778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.581791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 
00:36:36.956 [2024-07-25 14:04:33.582019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.582031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.582356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.582369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.582626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.582637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.582976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.956 [2024-07-25 14:04:33.582988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.956 qpair failed and we were unable to recover it. 00:36:36.956 [2024-07-25 14:04:33.583311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.583323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.583658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.583670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.583995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.584007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.584337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.584349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.584659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.584672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.584941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.584953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 
00:36:36.957 [2024-07-25 14:04:33.585272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.585284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.585484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.585496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.585812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.585825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.586072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.586084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.586268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.586280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.586579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.586591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.586852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.586864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.587161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.587173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.587500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.587512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.587765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.587778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 
00:36:36.957 [2024-07-25 14:04:33.588120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.588132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.588485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.588497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.588745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.588757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.588992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.589004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.589329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.589341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.589595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.589608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.589920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.589933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.590276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.590288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.590602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.590646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.590971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.590991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 
00:36:36.957 [2024-07-25 14:04:33.591323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.591340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.591537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.591554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.591884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.591901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.592258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.592275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.592613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.592629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.592942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.592959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.593265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.593281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.593540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.593556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.593807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.593824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.594180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.594196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 
00:36:36.957 [2024-07-25 14:04:33.594465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.594482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.957 [2024-07-25 14:04:33.594815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.957 [2024-07-25 14:04:33.594836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.957 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.595148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.595164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.595438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.595452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.595794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.595807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.596149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.596161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.596426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.596438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.596760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.596772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.597028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.597040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.597282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.597294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 
00:36:36.958 [2024-07-25 14:04:33.597591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.597604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.597850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.597863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.598107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.598120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.598440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.598452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.598775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.598787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.599119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.599131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.599455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.599467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.599800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.599813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.600086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.600099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.600347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.600359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 
00:36:36.958 [2024-07-25 14:04:33.600667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.600679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.600942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.600955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.601198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.601210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.601391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.601403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.601721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.601733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.601966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.601979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.602230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.602243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.602560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.602573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.602905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.602917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.603189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.603201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 
00:36:36.958 [2024-07-25 14:04:33.603385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.603397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.603645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.603657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.603926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.603939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.604285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.604297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.604593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.604605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.604940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.604953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.605275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.605287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.605621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.605633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.958 [2024-07-25 14:04:33.605906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.958 [2024-07-25 14:04:33.605918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.958 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.606241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.606254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 
00:36:36.959 [2024-07-25 14:04:33.606459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.606471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.606747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.606759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.607083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.607095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.607393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.607405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.607728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.607742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.608060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.608072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.608402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.608415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.608743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.608756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.609083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.609095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 00:36:36.959 [2024-07-25 14:04:33.609422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.959 [2024-07-25 14:04:33.609434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.959 qpair failed and we were unable to recover it. 
00:36:36.959 [2024-07-25 14:04:33.609757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.959 [2024-07-25 14:04:33.609769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.959 qpair failed and we were unable to recover it.
00:36:36.959 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, for every subsequent reconnect attempt from 14:04:33.610104 through 14:04:33.672591 ...]
00:36:36.965 [2024-07-25 14:04:33.672833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.672846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.673143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.673155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.673479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.673491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.673794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.673806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.674053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.674065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.674259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.674271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.674590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.674603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.674870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.674882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.675211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.675225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.675470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.675482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 
00:36:36.965 [2024-07-25 14:04:33.675787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.675799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.676097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.676109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.676290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.676303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.676557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.676570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.676894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.676906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.677237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.677250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.677572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.677585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.677918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.677931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.678251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.678263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.678506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.678518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 
00:36:36.965 [2024-07-25 14:04:33.678821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.678834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.679154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.679166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.679491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.679503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.679836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.679848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.680121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.680133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.680369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.680382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.680694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.680707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.680962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.680974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.681219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.681231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 00:36:36.965 [2024-07-25 14:04:33.681463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.965 [2024-07-25 14:04:33.681475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.965 qpair failed and we were unable to recover it. 
00:36:36.966 [2024-07-25 14:04:33.681746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.681759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.682075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.682087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.682427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.682440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.682755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.682768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.683033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.683045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.683279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.683292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.683555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.683568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.683889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.683901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.684238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.684251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.684498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.684510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 
00:36:36.966 [2024-07-25 14:04:33.684829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.684842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.685096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.685108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.685448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.685461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.685734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.685746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.686080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.686092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.686366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.686378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.686629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.686641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.686965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.686978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.687241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.687255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.687483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.687496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 
00:36:36.966 [2024-07-25 14:04:33.687744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.687757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.688002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.688015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.688337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.688349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.688653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.688665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.688982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.688995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.689297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.689310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.689631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.689643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.689966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.689979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.690237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.690250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.690548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.690560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 
00:36:36.966 [2024-07-25 14:04:33.690882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.690895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.691136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.691149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.691458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.691470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.691740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.691753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.692096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.692109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.692407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.692419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.692718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.966 [2024-07-25 14:04:33.692731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.966 qpair failed and we were unable to recover it. 00:36:36.966 [2024-07-25 14:04:33.693057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.693070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.693387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.693400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.693735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.693748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 
00:36:36.967 [2024-07-25 14:04:33.694094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.694106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.694450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.694462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.694759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.694772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.695108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.695120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.695304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.695316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.695563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.695576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.695898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.695911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.696141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.696153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.696472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.696484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.696803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.696816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 
00:36:36.967 [2024-07-25 14:04:33.697129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.697142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.697484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.697496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.697840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.697853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.698193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.698205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.698473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.698485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.698733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.698745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.699087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.699099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.699449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.699461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.699758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.699772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.700071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.700083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 
00:36:36.967 [2024-07-25 14:04:33.700403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.700415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.700743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.700756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.701085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.701097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.701411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.701424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.701736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.701749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.702072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.702084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.702276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.702288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.702588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.702601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.702923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.702935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.703187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.703200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 
00:36:36.967 [2024-07-25 14:04:33.703375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.703387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.703736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.703749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.704088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.704100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.704419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.704431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.967 qpair failed and we were unable to recover it. 00:36:36.967 [2024-07-25 14:04:33.704662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.967 [2024-07-25 14:04:33.704674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.704988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.705001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.705298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.705310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.705571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.705583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.705833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.705845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.706165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.706177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 
00:36:36.968 [2024-07-25 14:04:33.706417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.706429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.706683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.706695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.706928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.706941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.707123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.707136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.707433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.707445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.707745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.707758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.708079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.708092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.708425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.708437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.708735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.708748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.708991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.709004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 
00:36:36.968 [2024-07-25 14:04:33.709301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.709313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.709541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.709553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.709875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.709887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.710119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.710131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.710399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.710412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.710730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.710743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.710932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.710944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.711248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.711260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.711583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.711598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 00:36:36.968 [2024-07-25 14:04:33.711918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.968 [2024-07-25 14:04:33.711931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.968 qpair failed and we were unable to recover it. 
00:36:36.968 [2024-07-25 14:04:33.712234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:36.968 [2024-07-25 14:04:33.712246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 
00:36:36.968 qpair failed and we were unable to recover it. 
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously for each reconnect attempt from 2024-07-25 14:04:33.712 through 14:04:33.775 ...] 
00:36:36.974 [2024-07-25 14:04:33.775280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:36.974 [2024-07-25 14:04:33.775292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 
00:36:36.974 qpair failed and we were unable to recover it. 
00:36:36.974 [2024-07-25 14:04:33.775611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.775623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.775949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.775962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.776323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.776335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.776661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.776673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.776995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.777007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.777336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.777349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.777583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.777595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.777861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.777873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.778195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.778207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.778540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.778552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 
00:36:36.974 [2024-07-25 14:04:33.778738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.778750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.779006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.779018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.779267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.779280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.779581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.779593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.779913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.779926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.974 qpair failed and we were unable to recover it. 00:36:36.974 [2024-07-25 14:04:33.780246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.974 [2024-07-25 14:04:33.780260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.780510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.780522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.780846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.780859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.781180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.781192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.781521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.781534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 
00:36:36.975 [2024-07-25 14:04:33.781786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.781798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.782126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.782138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.782367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.782379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.782619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.782631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.782982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.782994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.783330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.783343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.783689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.783701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.784048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.784061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.784404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.784417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.784765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.784778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 
00:36:36.975 [2024-07-25 14:04:33.785028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.785040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.785273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.785285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.785580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.785592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.785912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.785925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.786171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.786183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.786486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.786498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.786765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.786778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.787087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.787099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.787396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.787408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.787722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.787735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 
00:36:36.975 [2024-07-25 14:04:33.788054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.788066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.788364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.788376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.788621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.788633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.788957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.788970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.789300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.789312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.789664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.789676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.789939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.789951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.790268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.790280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.790544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.790556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.790891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.790904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 
00:36:36.975 [2024-07-25 14:04:33.791154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.791166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.791464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.791476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.791818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.975 [2024-07-25 14:04:33.791830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.975 qpair failed and we were unable to recover it. 00:36:36.975 [2024-07-25 14:04:33.792173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.792185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.792521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.792534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.792854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.792868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.793167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.793179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.793426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.793438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.793759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.793772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.794069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.794081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 
00:36:36.976 [2024-07-25 14:04:33.794400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.794412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.794662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.794674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.794993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.795006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.795237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.795249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.795566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.795578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.795752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.795765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.796083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.796095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.796417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.796429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.796746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.796759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.797073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.797086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 
00:36:36.976 [2024-07-25 14:04:33.797406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.797418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.797669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.797681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.798003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.798016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.798246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.798258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.798585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.798597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.798894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.798906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.799205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.799217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.799467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.799479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.799709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.799725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.800052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.800064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 
00:36:36.976 [2024-07-25 14:04:33.800418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.800431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.800778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.800790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.801089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.801101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.801365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.801378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.801696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.801708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.802041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.802053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.802352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.802365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.802634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.802646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.802896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.802909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 00:36:36.976 [2024-07-25 14:04:33.803232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.803244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.976 qpair failed and we were unable to recover it. 
00:36:36.976 [2024-07-25 14:04:33.803560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.976 [2024-07-25 14:04:33.803572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.803905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.803917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.804242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.804254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.804563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.804575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.804898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.804911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.805233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.805247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.805510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.805522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.805857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.805869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.806216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.806228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.806473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.806485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 
00:36:36.977 [2024-07-25 14:04:33.806768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.806780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.807075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.807088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.807404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.807417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.807747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.807759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.808113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.808125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.808467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.808479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.808796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.808809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.809080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.809092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.809420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.809432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.809757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.809770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 
00:36:36.977 [2024-07-25 14:04:33.810100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.810113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.810436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.810448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.810704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.810726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.811033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.811045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.811344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.811356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.811677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.811689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.811922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.811935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.812237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.812249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.812551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.812563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.812881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.812894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 
00:36:36.977 [2024-07-25 14:04:33.813137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.813150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.813344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.813356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.813683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.813696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.814066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.814080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.814323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.814334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.814575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.814588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.814822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.814835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.815065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.977 [2024-07-25 14:04:33.815077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.977 qpair failed and we were unable to recover it. 00:36:36.977 [2024-07-25 14:04:33.815344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.978 [2024-07-25 14:04:33.815357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.978 qpair failed and we were unable to recover it. 00:36:36.978 [2024-07-25 14:04:33.815674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.978 [2024-07-25 14:04:33.815686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:36.978 qpair failed and we were unable to recover it. 
00:36:36.978 [2024-07-25 14:04:33.816029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:36.978 [2024-07-25 14:04:33.816041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:36.978 qpair failed and we were unable to recover it.
[identical records elided: the same three-message triplet -- posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats ~210 times between 14:04:33.816 and 14:04:33.879; every reconnect attempt in this span fails the same way]
00:36:37.260 [2024-07-25 14:04:33.879476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.260 [2024-07-25 14:04:33.879489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:37.260 qpair failed and we were unable to recover it.
00:36:37.260 [2024-07-25 14:04:33.879741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.879754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.880070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.880082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.880403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.880415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.880720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.880732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.881056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.881068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.881386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.881398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.881662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.881675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.881920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.881932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.882253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.882265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.882494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.882506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 
00:36:37.260 [2024-07-25 14:04:33.882775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.882787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.883084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.883096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.883345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.883357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.883675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.883687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.883940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.883953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.884276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.884288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.884531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.884544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.884802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.884815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.885081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.885093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.260 [2024-07-25 14:04:33.885324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.885336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 
00:36:37.260 [2024-07-25 14:04:33.885669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.260 [2024-07-25 14:04:33.885681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.260 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.885988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.886000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.886272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.886284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.886586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.886598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.886895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.886908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.887102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.887114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.887355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.887367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.887705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.887720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.887982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.887994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.888307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.888319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 
00:36:37.261 [2024-07-25 14:04:33.888637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.888649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.888951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.888963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.889282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.889294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.889617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.889629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.889963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.889975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.890222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.890236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.890478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.890490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.890732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.890744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.891066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.891080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.891329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.891341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 
00:36:37.261 [2024-07-25 14:04:33.891585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.891597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.891919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.891932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.892255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.892267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.892507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.892520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.892856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.892869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.893217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.893229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.893577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.893591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.893935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.893948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.894294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.894308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.894660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.894673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 
00:36:37.261 [2024-07-25 14:04:33.894925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.894938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.895269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.895281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.895603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.895615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.895926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.895939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.896218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.896231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.896587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.896599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.896943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.896956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.261 [2024-07-25 14:04:33.897301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.261 [2024-07-25 14:04:33.897315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.261 qpair failed and we were unable to recover it. 00:36:37.262 [2024-07-25 14:04:33.897542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.262 [2024-07-25 14:04:33.897555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.262 qpair failed and we were unable to recover it. 00:36:37.262 [2024-07-25 14:04:33.897878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.262 [2024-07-25 14:04:33.897890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.262 qpair failed and we were unable to recover it. 
[... two further identical failures against tqpair=0x7f059c000b90 at 14:04:33.898211 and 14:04:33.898590, then the failing qpair handle changes ...]
00:36:37.262 [2024-07-25 14:04:33.898904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.262 [2024-07-25 14:04:33.898941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0594000b90 with addr=10.0.0.2, port=4420
00:36:37.262 qpair failed and we were unable to recover it.
[... five more identical failures against tqpair=0x7f0594000b90 through 14:04:33.900512 ...]
00:36:37.262 [2024-07-25 14:04:33.900784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.262 [2024-07-25 14:04:33.900798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:37.262 qpair failed and we were unable to recover it.
[... the identical failure triplet for tqpair=0x7f059c000b90 repeats another 101 times, errno = 111 on every connect() to 10.0.0.2, port 4420, through 14:04:33.930720 ...]
00:36:37.265 [2024-07-25 14:04:33.931066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.931078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.931354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.931366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.931685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.931697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.931954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.931967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.932288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.932300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.932575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.932587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.932912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.932925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.933224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.933236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.933511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.933524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.933871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.933883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 
00:36:37.265 [2024-07-25 14:04:33.934234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.934247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.934557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.934569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.934889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.934904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.935152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.935164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.935412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.935424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.935674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.935686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.935995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.936007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.936258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.936271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.936592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.936605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.936927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.936940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 
00:36:37.265 [2024-07-25 14:04:33.937261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.937273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.937609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.937622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.937926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.937939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.938238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.938251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.938555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.938567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.938901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.938913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.939235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.939248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.939581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.939595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.939937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.939950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.940262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.940275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 
00:36:37.265 [2024-07-25 14:04:33.940519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.265 [2024-07-25 14:04:33.940531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.265 qpair failed and we were unable to recover it. 00:36:37.265 [2024-07-25 14:04:33.940851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.940864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.941139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.941152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.941492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.941505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.941753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.941766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.942110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.942122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.942472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.942484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.942743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.942756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.943015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.943028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.943301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.943313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 
00:36:37.266 [2024-07-25 14:04:33.943654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.943666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.943965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.943978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.944206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.944218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.944458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.944470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.944668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.944681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.944873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.944886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.945205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.945218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.945493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.945505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.945602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.945614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.945959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.945973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 
00:36:37.266 [2024-07-25 14:04:33.946271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.946284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.946471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.946483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.946797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.946811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.947130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.947143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.947462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.947474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.947719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.947731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.948056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.948069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.948389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.948402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.948560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.948572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.948749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.948761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 
00:36:37.266 [2024-07-25 14:04:33.949009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.949022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.949252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.949265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.949563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.949575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.949818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.949831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.950060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.950073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.950414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.950427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.950707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.950732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.950977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.950990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.266 qpair failed and we were unable to recover it. 00:36:37.266 [2024-07-25 14:04:33.951235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.266 [2024-07-25 14:04:33.951248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.951510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.951522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 
00:36:37.267 [2024-07-25 14:04:33.951820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.951833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.952130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.952142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.952421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.952433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.952526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.952538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.952862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.952875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.953121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.953134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.953381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.953393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.953660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.953672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.953850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.953863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.954143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.954156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 
00:36:37.267 [2024-07-25 14:04:33.954454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.954466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.954717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.954730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.954927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.954939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.955203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.955216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.955479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.955491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.955736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.955749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.956058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.956070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.956339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.956352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.956666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.956679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.956910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.956923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 
00:36:37.267 [2024-07-25 14:04:33.957176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.957188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.957508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.957521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.957749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.957765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.957941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.957953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.958220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.958232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.958482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.958494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.958746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.958759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.958917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.958929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.959104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.959116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.959413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.959425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 
00:36:37.267 [2024-07-25 14:04:33.959739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.959751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.960091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.960103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.960294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.960306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.960601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.960613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.960921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.960933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.267 [2024-07-25 14:04:33.961199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.267 [2024-07-25 14:04:33.961211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.267 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.961533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.961546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.961842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.961855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.962148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.962160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.962503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.962515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 
00:36:37.268 [2024-07-25 14:04:33.962771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.962784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.963028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.963041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.963360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.963373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.963613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.963626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.963857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.963870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.964192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.964204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.964386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.964398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.964694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.964706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.964981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.964993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.965301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.965314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 
00:36:37.268 [2024-07-25 14:04:33.965564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.965577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.965953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.965966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.966160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.966172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.966493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.966506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.966837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.966850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.967170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.967183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.967513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.967525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.967765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.967778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.968101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.968114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 00:36:37.268 [2024-07-25 14:04:33.968439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.968451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it. 
00:36:37.268 [2024-07-25 14:04:33.968754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.268 [2024-07-25 14:04:33.968767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.268 qpair failed and we were unable to recover it.
[the three-line error above repeats ~200 more times between 14:04:33.968 and 14:04:34.027, differing only in timestamps: every connect() attempt to 10.0.0.2 port 4420 returned errno = 111 and each qpair failed without recovery]
00:36:37.274 [2024-07-25 14:04:34.027124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.027136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it.
00:36:37.274 [2024-07-25 14:04:34.027383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.027395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.027741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.027754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.027942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.027954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.028226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.028238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.028508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.028521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.028782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.028796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.029115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.029127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.029450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.029462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.029631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.029643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.029821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.029834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 
00:36:37.274 [2024-07-25 14:04:34.030148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.030161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.030389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.030402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.030655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.030667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.274 qpair failed and we were unable to recover it. 00:36:37.274 [2024-07-25 14:04:34.030900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.274 [2024-07-25 14:04:34.030912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.031154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.031167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.031415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.031428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.031671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.031684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.031942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.031956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.032154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.032167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.032518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.032530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 
00:36:37.275 [2024-07-25 14:04:34.032833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.032846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.033147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.033159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.033355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.033368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.033688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.033700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.034003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.034015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.034244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.034257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.034584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.034596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.034777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.034790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.035024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.035037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.035285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.035298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 
00:36:37.275 [2024-07-25 14:04:34.035550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.035562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.035808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.035821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.036119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.036134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.036386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.036398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.036555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.036567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.036912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.036925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.037173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.037185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.037359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.037372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.037668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.037681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.038000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.038013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 
00:36:37.275 [2024-07-25 14:04:34.038335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.038348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.038590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.038602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.038905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.038919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.039237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.039249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.039499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.039512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.039757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.039770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.039968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.039981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.040224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.040237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.040503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.040516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.275 [2024-07-25 14:04:34.040840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.040852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 
00:36:37.275 [2024-07-25 14:04:34.041171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.275 [2024-07-25 14:04:34.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.275 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.041416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.041428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.041670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.041682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.041913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.041926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.042199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.042212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.042460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.042472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.042776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.042789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.042966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.042978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.043234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.043247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.043483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.043496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 
00:36:37.276 [2024-07-25 14:04:34.043759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.043773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.044044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.044057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.044216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.044229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.044502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.044515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.044757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.044770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.044998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.045012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.045309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.045322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.045643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.045656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.045904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.045918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.046179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.046192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 
00:36:37.276 [2024-07-25 14:04:34.046384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.046397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.046724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.046737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.046983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.046997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.047297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.047310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.047606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.047618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.047938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.047951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.048128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.048141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.048393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.048406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.048664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.048677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.048954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.048967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 
00:36:37.276 [2024-07-25 14:04:34.049215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.049228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.049471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.049485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.049804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.049817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.050137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.050150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.050393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.050406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.050663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.050675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.050906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.050919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.051215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.276 [2024-07-25 14:04:34.051228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.276 qpair failed and we were unable to recover it. 00:36:37.276 [2024-07-25 14:04:34.051486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.051499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.051769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.051782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 
00:36:37.277 [2024-07-25 14:04:34.052025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.052038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.052356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.052369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.052611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.052624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.052802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.052815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.053133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.053146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.053469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.053482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.053710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.053733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.053890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.053903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.054161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.054173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.054422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.054435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 
00:36:37.277 [2024-07-25 14:04:34.054596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.054609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.054774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.054787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.054971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.054985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.055281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.055294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.055609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.055622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.055964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.055977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.056221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.056234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.056551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.056564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.056745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.056758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.057076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.057089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 
00:36:37.277 [2024-07-25 14:04:34.057258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.057271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.057457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.057469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.057707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.057724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.058046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.058059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.058381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.058394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.058667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.058679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.058998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.059012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.059259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.059271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.059613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.059625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.059896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.059910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 
00:36:37.277 [2024-07-25 14:04:34.060157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.060170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.060469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.060481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.060800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.060813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.061139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.061152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.061394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.061407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.277 qpair failed and we were unable to recover it. 00:36:37.277 [2024-07-25 14:04:34.061653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.277 [2024-07-25 14:04:34.061666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.278 qpair failed and we were unable to recover it. 00:36:37.278 [2024-07-25 14:04:34.061860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.278 [2024-07-25 14:04:34.061873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.278 qpair failed and we were unable to recover it. 00:36:37.278 [2024-07-25 14:04:34.062044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.278 [2024-07-25 14:04:34.062056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.278 qpair failed and we were unable to recover it. 00:36:37.278 [2024-07-25 14:04:34.062225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.278 [2024-07-25 14:04:34.062237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.278 qpair failed and we were unable to recover it. 00:36:37.278 [2024-07-25 14:04:34.062557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.278 [2024-07-25 14:04:34.062569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.278 qpair failed and we were unable to recover it. 
00:36:37.278 [2024-07-25 14:04:34.062810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.278 [2024-07-25 14:04:34.062824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.278 qpair failed and we were unable to recover it.
00:36:37.278 [... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim roughly 160 more times, with only the timestamps advancing from 14:04:34.063158 through 14:04:34.112110 ...]
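For anyone triaging this failure: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting on 10.0.0.2:4420 while the disconnect test had the target down. The following is a minimal sketch of the kind of single connect attempt that produces such a log entry; the try_connect() helper is hypothetical and is not the actual SPDK posix_sock_create() implementation.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper, NOT the SPDK posix_sock_create() code: make a
 * single TCP connect attempt to addr:port and report errno on failure,
 * mirroring the "connect() failed, errno = 111" entries above. */
static int try_connect(const char *addr, uint16_t port)
{
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        return -1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, addr, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* errno is ECONNREFUSED (111 on Linux) when no listener is
         * accepting on addr:port. */
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* 10.0.0.2:4420 matches the target address and NVMe/TCP port in the log. */
    int fd = try_connect("10.0.0.2", 4420);

    if (fd >= 0) {
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}

Run against a host with no listener on that port, this should print the same "connect() failed, errno = 111" text as the entries above.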
00:36:37.282 [... the repeating connect()/qpair-failure sequence continues from 14:04:34.112372 through 14:04:34.114615, interleaved with the script trace below ...] 00:36:37.282 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:37.283 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:37.283 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:37.283 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:37.283 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:37.283 [... the repeating error sequence continues from 14:04:34.114887 through 14:04:34.117305 ...]
00:36:37.552 [... connect() retry triplets continue ...]
00:36:37.552 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:37.552 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:37.552 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:37.552 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:37.552 [2024-07-25 14:04:34.161178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.552 [2024-07-25 14:04:34.161192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:37.552 qpair failed and we were unable to recover it.
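The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 trace is the test allocating a 64 MB RAM-backed bdev with 512-byte blocks for the target to export. A hedged sketch of a comparable bring-up done directly with SPDK's scripts/rpc.py against an already running nvmf_tgt; only bdev_malloc_create's arguments come from the log, and the NQN and transport parameters below are illustrative:

  # Assumes an SPDK nvmf_tgt is running and scripts/rpc.py can reach its RPC socket.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_transport -t TCP            # enable the TCP transport
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # illustrative NQN; allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420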
00:36:37.553 [... connect() retry triplets continue ...]
00:36:37.553 [2024-07-25 14:04:34.172909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.172921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.173254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.173268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.173574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.173588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.173968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.173982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.174167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.174180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.174514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.174527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.174869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.174885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.175160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.175175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.175497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.175512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 00:36:37.553 [2024-07-25 14:04:34.175837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.553 [2024-07-25 14:04:34.175853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.553 qpair failed and we were unable to recover it. 
00:36:37.554 [2024-07-25 14:04:34.176152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.176165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.176464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.176477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.176823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.176836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.177109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.177121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.177444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.177457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.177711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.177726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.177997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.178010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.178309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.178322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 [2024-07-25 14:04:34.178571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.178583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 00:36:37.554 Malloc0 00:36:37.554 [2024-07-25 14:04:34.178927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.554 [2024-07-25 14:04:34.178940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420 00:36:37.554 qpair failed and we were unable to recover it. 
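errno 111 on Linux is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is refused because the target is not yet listening on that address; the *** NVMe/TCP Target Listening *** notice only appears after the nvmf_subsystem_add_listener RPC further down, so every attempt in between fails and the qpair cannot recover. A minimal shell probe for the same condition (a sketch, assuming the 10.0.0.2/4420 pair from this log and that nc(1) is installed on the test host):

    # connect() gets ECONNREFUSED (errno 111) until the target side has run
    # nvmf_subsystem_add_listener for 10.0.0.2:4420.
    if nc -z -w 1 10.0.0.2 4420; then
        echo "listener is up: TCP connections to 4420 are accepted"
    else
        echo "connection refused or timed out: listener not configured yet"
    fi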
00:36:37.554 [... connect() retry sequence continues at 14:04:34.179252 ...]
00:36:37.554 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:37.554 [... connect() retry sequence continues, 14:04:34.179537-14:04:34.179812 ...]
00:36:37.554 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:37.554 [... connect() retry sequence continues at 14:04:34.180054 ...]
00:36:37.554 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:37.554 [... connect() retry sequence continues at 14:04:34.180378 ...]
00:36:37.554 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:37.554 [... connect() retry sequence continues, 14:04:34.180707-14:04:34.181130 ...]
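The rpc_cmd nvmf_create_transport -t tcp trace above is the first target-side setup step of the test: it instantiates the TCP transport inside the running nvmf target app, and the *** TCP Transport Init *** notice a few lines below is its direct effect. Outside the harness the same step maps onto SPDK's rpc.py (a sketch, assuming an nvmf_tgt already running on the default RPC socket; the extra -o flag the harness passes is omitted here):

    # Create the TCP transport in a running nvmf_tgt (RPC socket /var/tmp/spdk.sock).
    ./scripts/rpc.py nvmf_create_transport -t TCP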
00:36:37.554 [... connect() retry sequence continues for every attempt from 14:04:34.181334 through 14:04:34.184146; each ends with "qpair failed and we were unable to recover it." ...]
00:36:37.554 [... connect() retry sequence continues, 14:04:34.184394-14:04:34.185977 ...]
00:36:37.554 [2024-07-25 14:04:34.186238] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:37.555 [... connect() retry sequence continues, 14:04:34.186239-14:04:34.186868 ...]
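Once TCP Transport Init has been logged, the transport object exists in the target and can be inspected; a quick sanity check (again a sketch against the default RPC socket):

    # Should list the TCP transport together with its queue-depth/buffer parameters.
    ./scripts/rpc.py nvmf_get_transports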
00:36:37.555 [... connect() retry sequence continues for every attempt from 14:04:34.187045 through 14:04:34.194956 ...]
00:36:37.555 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:37.555 [... connect() retry sequence continues at 14:04:34.195278 ...]
00:36:37.555 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:37.555 [... connect() retry sequence continues at 14:04:34.195566 ...]
00:36:37.555 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:37.555 [... connect() retry sequence continues at 14:04:34.195879 ...]
00:36:37.555 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:37.555 [... connect() retry sequence continues, 14:04:34.196144-14:04:34.197597 ...]
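Next setup step: nvmf_create_subsystem registers the NVMe-oF subsystem NQN the initiator has been trying to reach; -a allows any host NQN to connect and -s sets the serial number the controller will report. The standalone equivalent (same rpc.py assumption as above):

    # Create the subsystem; allow any host (-a) and set its serial number (-s).
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001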
00:36:37.556 [... connect() retry sequence continues for every attempt from 14:04:34.197827 through 14:04:34.202909 ...]
00:36:37.556 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:37.556 [... connect() retry sequence continues at 14:04:34.203207 ...]
00:36:37.556 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:37.556 [... connect() retry sequence continues at 14:04:34.203533 ...]
00:36:37.556 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:37.556 [... connect() retry sequence continues at 14:04:34.203775 ...]
00:36:37.556 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:37.556 [... connect() retry sequence continues, 14:04:34.204092-14:04:34.205673 ...]
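nvmf_subsystem_add_ns attaches a namespace to the subsystem, backed by the Malloc0 RAM-disk bdev whose name surfaced in the output earlier. A sketch of the same pair of steps done by hand (the 64 MiB / 512-byte geometry is an illustrative assumption, not taken from this log):

    # Create a RAM-disk bdev, then expose it as a namespace of cnode1.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0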
00:36:37.556 [... connect() retry sequence continues for every attempt from 14:04:34.205976 through 14:04:34.210984 ...]
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:37.557 [... connect() retry sequence continues at 14:04:34.211304 ...]
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:37.557 [... connect() retry sequence continues at 14:04:34.211644 ...]
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:37.557 [... connect() retry sequence continues, 14:04:34.212014-14:04:34.213801 ...]
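nvmf_subsystem_add_listener is the step that finally answers the initiator: once it completes, the *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** notice below appears and the ECONNREFUSED retry loop stops. Standalone sketch:

    # Open the TCP listener; after this, connect() to 10.0.0.2:4420 succeeds.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420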
00:36:37.557 [2024-07-25 14:04:34.214122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.557 [2024-07-25 14:04:34.214136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:37.557 qpair failed and we were unable to recover it.
00:36:37.557 [2024-07-25 14:04:34.214366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:37.557 [2024-07-25 14:04:34.214378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f059c000b90 with addr=10.0.0.2, port=4420
00:36:37.557 qpair failed and we were unable to recover it.
00:36:37.557 [2024-07-25 14:04:34.214480] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:37.557 [2024-07-25 14:04:34.216844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.557 [2024-07-25 14:04:34.216941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.557 [2024-07-25 14:04:34.216963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.557 [2024-07-25 14:04:34.216979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.557 [2024-07-25 14:04:34.216989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.557 [2024-07-25 14:04:34.217014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.557 qpair failed and we were unable to recover it.
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:37.557 [2024-07-25 14:04:34.226794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.557 [2024-07-25 14:04:34.226885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.557 [2024-07-25 14:04:34.226904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.557 [2024-07-25 14:04:34.226914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.557 [2024-07-25 14:04:34.226922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.557 [2024-07-25 14:04:34.226942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.557 qpair failed and we were unable to recover it.
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:37.557 14:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 515889
00:36:37.557 [2024-07-25 14:04:34.236832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.557 [2024-07-25 14:04:34.236936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.557 [2024-07-25 14:04:34.236955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.557 [2024-07-25 14:04:34.236964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.557 [2024-07-25 14:04:34.236973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.236992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.246804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.246889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.246908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.246917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.246925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.246944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.256807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.256894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.256914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.256923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.256932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.256951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.266830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.266916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.266934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.266943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.266952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.266970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.276880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.276964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.276982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.276991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.277000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.277018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.286833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.286933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.286951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.286960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.286969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.286987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.296827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.296909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.296927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.296938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.296946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.296965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.306892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.306973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.306992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.307002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.307011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.307029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.316923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.317042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.317060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.317070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.317078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.317097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.326905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.327070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.327089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.327098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.327107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.327126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.337018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.337105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.337123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.337132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.337140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.337159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.347074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.558 [2024-07-25 14:04:34.347154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.558 [2024-07-25 14:04:34.347171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.558 [2024-07-25 14:04:34.347181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.558 [2024-07-25 14:04:34.347189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.558 [2024-07-25 14:04:34.347207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.558 qpair failed and we were unable to recover it.
00:36:37.558 [2024-07-25 14:04:34.357057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.357138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.357155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.357164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.357173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.357191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.367012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.367096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.367113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.367122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.367131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.367149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.377126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.377204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.377221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.377230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.377239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.377257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.387176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.387256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.387276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.387285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.387293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.387311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.397148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.397228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.397244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.397254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.397262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.397280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.407224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.407308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.407324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.407333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.407341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.407360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.417215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.417302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.417319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.417329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.417337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.417355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.559 [2024-07-25 14:04:34.427261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.559 [2024-07-25 14:04:34.427344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.559 [2024-07-25 14:04:34.427361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.559 [2024-07-25 14:04:34.427370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.559 [2024-07-25 14:04:34.427379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.559 [2024-07-25 14:04:34.427399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.559 qpair failed and we were unable to recover it.
00:36:37.819 [2024-07-25 14:04:34.437343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.819 [2024-07-25 14:04:34.437442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.819 [2024-07-25 14:04:34.437459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.819 [2024-07-25 14:04:34.437468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.819 [2024-07-25 14:04:34.437477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.819 [2024-07-25 14:04:34.437495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.819 qpair failed and we were unable to recover it.
00:36:37.819 [2024-07-25 14:04:34.447335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.819 [2024-07-25 14:04:34.447417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.819 [2024-07-25 14:04:34.447434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.819 [2024-07-25 14:04:34.447443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.819 [2024-07-25 14:04:34.447451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.819 [2024-07-25 14:04:34.447469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.819 qpair failed and we were unable to recover it.
00:36:37.819 [2024-07-25 14:04:34.457408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.819 [2024-07-25 14:04:34.457493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.819 [2024-07-25 14:04:34.457510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.819 [2024-07-25 14:04:34.457519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.819 [2024-07-25 14:04:34.457527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.819 [2024-07-25 14:04:34.457546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.819 qpair failed and we were unable to recover it.
00:36:37.819 [2024-07-25 14:04:34.467419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.819 [2024-07-25 14:04:34.467503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.819 [2024-07-25 14:04:34.467520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.819 [2024-07-25 14:04:34.467529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.819 [2024-07-25 14:04:34.467538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.819 [2024-07-25 14:04:34.467556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.819 qpair failed and we were unable to recover it.
00:36:37.819 [2024-07-25 14:04:34.477428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.819 [2024-07-25 14:04:34.477511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.819 [2024-07-25 14:04:34.477532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.819 [2024-07-25 14:04:34.477541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.819 [2024-07-25 14:04:34.477549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.819 [2024-07-25 14:04:34.477567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.819 qpair failed and we were unable to recover it.
00:36:37.819 [2024-07-25 14:04:34.487348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.819 [2024-07-25 14:04:34.487431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.819 [2024-07-25 14:04:34.487448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.819 [2024-07-25 14:04:34.487457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.487465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.487483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.497443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.497529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.497546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.497555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.497563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.497582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.507512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.507595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.507612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.507621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.507629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.507647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.517502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.517630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.517648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.517657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.517669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.517687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.527515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.527600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.527617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.527626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.527635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.527653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.537542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.537623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.537640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.537650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.537658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.537676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.547584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.547665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.547683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.547692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.547700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.547722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.557607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.557694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.557711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.557725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.557734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.557752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.567570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.567659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.567676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.567685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.567693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.567712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.577637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.577732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.577749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.577758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.577767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.577785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.587696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.587781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.587798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.587807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.587816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.587834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.597726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.597805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.597822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.597831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.597839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.820 [2024-07-25 14:04:34.597857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.820 qpair failed and we were unable to recover it.
00:36:37.820 [2024-07-25 14:04:34.607742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.820 [2024-07-25 14:04:34.607826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.820 [2024-07-25 14:04:34.607843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.820 [2024-07-25 14:04:34.607852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.820 [2024-07-25 14:04:34.607866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.607885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.617765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.617842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.617859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.617869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.617877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.617896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.627792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.627872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.627889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.627898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.627906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.627925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.637765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.637847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.637865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.637874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.637882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.637900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.647887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.647969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.647986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.647995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.648004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.648022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.657880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.657969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.657986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.657995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.658003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.658021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.667858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.667939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.667956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.667965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.667973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.667991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.677929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.678009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.678026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.678035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.678043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.678061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.687968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.688142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.688159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.688169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.688178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.688197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:37.821 [2024-07-25 14:04:34.698017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.821 [2024-07-25 14:04:34.698099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.821 [2024-07-25 14:04:34.698116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.821 [2024-07-25 14:04:34.698128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.821 [2024-07-25 14:04:34.698136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:37.821 [2024-07-25 14:04:34.698155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:37.821 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.708044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.708128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.708145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.708154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.708162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.708181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.718074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.718160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.718177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.718187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.718195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.718214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.728025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.728107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.728124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.728133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.728141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.728160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.738096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.738182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.738198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.738207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.738216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.738233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.748146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.748228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.748244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.748253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.748262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.748280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.758160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.758259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.758276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.758285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.758293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.758311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.768270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.768364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.768381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.768390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.768399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.768417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.778236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.778312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.778329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.778338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.778347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.778365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.788256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.788333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.788353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.788362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.788371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.081 [2024-07-25 14:04:34.788388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.081 qpair failed and we were unable to recover it.
00:36:38.081 [2024-07-25 14:04:34.798240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.081 [2024-07-25 14:04:34.798318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.081 [2024-07-25 14:04:34.798335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.081 [2024-07-25 14:04:34.798344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.081 [2024-07-25 14:04:34.798352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.798370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.808259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.808425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.808443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.808452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.808461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.808479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.818375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.818455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.818472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.818481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.818489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.818506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.828306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.828384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.828402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.828412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.828420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.828441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.838390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.838469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.838486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.838495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.838503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.838521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.848418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.848500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.848516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.848525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.848534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.848552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.858488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.858617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.858636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.858645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.858653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.858671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.868469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.868551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.868568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.868577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.868585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.868603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.878471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.878585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.878606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.878615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.878624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.878642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.888528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.888612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.888628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.888637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.888646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.888664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.898560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.898641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.898658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.898667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.898676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.898694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.908591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.908682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.908699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.908709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.908723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90
00:36:38.082 [2024-07-25 14:04:34.908742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.918626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.918741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.918771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.918786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.918803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.082 [2024-07-25 14:04:34.918831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.082 qpair failed and we were unable to recover it.
00:36:38.082 [2024-07-25 14:04:34.928639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.082 [2024-07-25 14:04:34.928728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.082 [2024-07-25 14:04:34.928747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.082 [2024-07-25 14:04:34.928757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.082 [2024-07-25 14:04:34.928765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.082 [2024-07-25 14:04:34.928783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.083 qpair failed and we were unable to recover it.
00:36:38.083 [2024-07-25 14:04:34.938660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.083 [2024-07-25 14:04:34.938751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.083 [2024-07-25 14:04:34.938768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.083 [2024-07-25 14:04:34.938778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.083 [2024-07-25 14:04:34.938786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.083 [2024-07-25 14:04:34.938804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.083 qpair failed and we were unable to recover it.
00:36:38.083 [2024-07-25 14:04:34.948649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.083 [2024-07-25 14:04:34.948734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.083 [2024-07-25 14:04:34.948751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.083 [2024-07-25 14:04:34.948761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.083 [2024-07-25 14:04:34.948769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.083 [2024-07-25 14:04:34.948787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.083 qpair failed and we were unable to recover it.
00:36:38.083 [2024-07-25 14:04:34.958733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.083 [2024-07-25 14:04:34.958819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.083 [2024-07-25 14:04:34.958836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.083 [2024-07-25 14:04:34.958845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.083 [2024-07-25 14:04:34.958854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.083 [2024-07-25 14:04:34.958871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.083 qpair failed and we were unable to recover it.
00:36:38.343 [2024-07-25 14:04:34.968783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.343 [2024-07-25 14:04:34.968870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.343 [2024-07-25 14:04:34.968888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.343 [2024-07-25 14:04:34.968897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.343 [2024-07-25 14:04:34.968906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.343 [2024-07-25 14:04:34.968923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.343 qpair failed and we were unable to recover it.
00:36:38.343 [2024-07-25 14:04:34.978789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.343 [2024-07-25 14:04:34.978872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.343 [2024-07-25 14:04:34.978889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.343 [2024-07-25 14:04:34.978898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.343 [2024-07-25 14:04:34.978907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.343 [2024-07-25 14:04:34.978924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.343 qpair failed and we were unable to recover it.
00:36:38.343 [2024-07-25 14:04:34.988832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.343 [2024-07-25 14:04:34.988911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.343 [2024-07-25 14:04:34.988929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.343 [2024-07-25 14:04:34.988938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.343 [2024-07-25 14:04:34.988947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.343 [2024-07-25 14:04:34.988964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.343 qpair failed and we were unable to recover it.
00:36:38.343 [2024-07-25 14:04:34.998841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.343 [2024-07-25 14:04:34.998928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.343 [2024-07-25 14:04:34.998945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.343 [2024-07-25 14:04:34.998955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.343 [2024-07-25 14:04:34.998963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.343 [2024-07-25 14:04:34.998981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.343 qpair failed and we were unable to recover it.
00:36:38.343 [2024-07-25 14:04:35.008895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.343 [2024-07-25 14:04:35.008974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.343 [2024-07-25 14:04:35.008992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.343 [2024-07-25 14:04:35.009001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.343 [2024-07-25 14:04:35.009013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.343 [2024-07-25 14:04:35.009031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.343 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.018954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.019066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.019083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.019093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.019101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.019119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.028957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.029080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.029098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.029107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.029116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.029133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.038975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.039058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.039076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.039084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.039093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.039110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.049034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.049137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.049155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.049164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.049173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.049190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.059034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.059124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.059141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.059150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.059158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.059176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.069061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.069141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.069157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.069167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.069175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.069192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.079127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.079204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.079221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.079230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.079239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.079255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.089078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.089160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.089177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.089186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.089194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.089211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.099142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.099226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.099243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.099252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.099264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.099281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.109167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.109291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.109310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.109319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.109327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.109345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.119288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.119368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.119385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.119394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.119403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.119420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.129211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.129293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.129311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.129320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.129329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.129345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.139254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.344 [2024-07-25 14:04:35.139334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.344 [2024-07-25 14:04:35.139352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.344 [2024-07-25 14:04:35.139361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.344 [2024-07-25 14:04:35.139369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.344 [2024-07-25 14:04:35.139387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.344 qpair failed and we were unable to recover it.
00:36:38.344 [2024-07-25 14:04:35.149290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.149373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.149390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.149399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.149407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.149424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.159318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.159395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.159413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.159422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.159431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.159447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.169355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.169439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.169456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.169465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.169473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.169490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.179382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.179468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.179485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.179494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.179502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.179519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.189385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.189467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.189484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.189496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.189505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.189522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.199442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.199527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.199544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.199553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.199562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.199579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.209558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.209640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.209657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.209666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.209675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.209692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.219471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.219563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.219581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.219590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.219599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.219617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.345 [2024-07-25 14:04:35.229464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.345 [2024-07-25 14:04:35.229550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.345 [2024-07-25 14:04:35.229567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.345 [2024-07-25 14:04:35.229576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.345 [2024-07-25 14:04:35.229585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.345 [2024-07-25 14:04:35.229602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.345 qpair failed and we were unable to recover it.
00:36:38.605 [2024-07-25 14:04:35.239551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.605 [2024-07-25 14:04:35.239628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.605 [2024-07-25 14:04:35.239645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.605 [2024-07-25 14:04:35.239655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.605 [2024-07-25 14:04:35.239663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.605 [2024-07-25 14:04:35.239680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.605 qpair failed and we were unable to recover it.
00:36:38.605 [2024-07-25 14:04:35.249588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.605 [2024-07-25 14:04:35.249668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.605 [2024-07-25 14:04:35.249686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.605 [2024-07-25 14:04:35.249695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.605 [2024-07-25 14:04:35.249703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.605 [2024-07-25 14:04:35.249724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.605 qpair failed and we were unable to recover it.
00:36:38.605 [2024-07-25 14:04:35.259600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.605 [2024-07-25 14:04:35.259681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.605 [2024-07-25 14:04:35.259699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.605 [2024-07-25 14:04:35.259708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.605 [2024-07-25 14:04:35.259721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.605 [2024-07-25 14:04:35.259738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.605 qpair failed and we were unable to recover it.
00:36:38.605 [2024-07-25 14:04:35.269629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.605 [2024-07-25 14:04:35.269707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.605 [2024-07-25 14:04:35.269727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.605 [2024-07-25 14:04:35.269737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.605 [2024-07-25 14:04:35.269745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.605 [2024-07-25 14:04:35.269762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.605 qpair failed and we were unable to recover it.
00:36:38.605 [2024-07-25 14:04:35.279639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.279724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.279741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.279754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.279762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.279780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.289710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.289795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.289812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.289821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.289829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.289846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.299718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.299797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.299814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.299824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.299832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.299849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.309744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.309820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.309839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.309848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.309856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.309874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.319687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.319770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.319788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.319797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.319805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.319822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.329802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.329881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.329899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.329908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.329916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.329933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.339757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.606 [2024-07-25 14:04:35.339855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.606 [2024-07-25 14:04:35.339872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.606 [2024-07-25 14:04:35.339881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.606 [2024-07-25 14:04:35.339889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:38.606 [2024-07-25 14:04:35.339906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:38.606 qpair failed and we were unable to recover it.
00:36:38.606 [2024-07-25 14:04:35.349854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.606 [2024-07-25 14:04:35.349937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.606 [2024-07-25 14:04:35.349954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.606 [2024-07-25 14:04:35.349963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.606 [2024-07-25 14:04:35.349972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.606 [2024-07-25 14:04:35.349988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.606 qpair failed and we were unable to recover it. 00:36:38.606 [2024-07-25 14:04:35.359916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.606 [2024-07-25 14:04:35.359994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.606 [2024-07-25 14:04:35.360012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.606 [2024-07-25 14:04:35.360020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.606 [2024-07-25 14:04:35.360029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.606 [2024-07-25 14:04:35.360046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.606 qpair failed and we were unable to recover it. 00:36:38.606 [2024-07-25 14:04:35.369923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.606 [2024-07-25 14:04:35.370006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.606 [2024-07-25 14:04:35.370023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.606 [2024-07-25 14:04:35.370036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.606 [2024-07-25 14:04:35.370044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.606 [2024-07-25 14:04:35.370061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.606 qpair failed and we were unable to recover it. 
00:36:38.606 [2024-07-25 14:04:35.379954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.606 [2024-07-25 14:04:35.380066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.606 [2024-07-25 14:04:35.380120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.606 [2024-07-25 14:04:35.380129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.606 [2024-07-25 14:04:35.380138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.606 [2024-07-25 14:04:35.380155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.606 qpair failed and we were unable to recover it. 00:36:38.606 [2024-07-25 14:04:35.389976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.606 [2024-07-25 14:04:35.390056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.606 [2024-07-25 14:04:35.390073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.606 [2024-07-25 14:04:35.390082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.606 [2024-07-25 14:04:35.390091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.606 [2024-07-25 14:04:35.390108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.606 qpair failed and we were unable to recover it. 00:36:38.606 [2024-07-25 14:04:35.400003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.606 [2024-07-25 14:04:35.400085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.606 [2024-07-25 14:04:35.400103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.606 [2024-07-25 14:04:35.400112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.606 [2024-07-25 14:04:35.400120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.606 [2024-07-25 14:04:35.400137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.606 qpair failed and we were unable to recover it. 
00:36:38.606 [2024-07-25 14:04:35.410032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.410109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.410127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.410136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.410145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.410162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 00:36:38.607 [2024-07-25 14:04:35.420050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.420129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.420146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.420155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.420163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.420180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 00:36:38.607 [2024-07-25 14:04:35.430009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.430096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.430113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.430122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.430131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.430148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 
00:36:38.607 [2024-07-25 14:04:35.440106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.440188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.440205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.440214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.440223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.440239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 00:36:38.607 [2024-07-25 14:04:35.450192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.450275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.450292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.450301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.450309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.450326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 00:36:38.607 [2024-07-25 14:04:35.460161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.460242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.460259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.460273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.460282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.460299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 
00:36:38.607 [2024-07-25 14:04:35.470221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.470339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.470357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.470366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.470374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.470391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 00:36:38.607 [2024-07-25 14:04:35.480239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.480321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.480338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.480347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.480355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.480372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 00:36:38.607 [2024-07-25 14:04:35.490224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.607 [2024-07-25 14:04:35.490305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.607 [2024-07-25 14:04:35.490322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.607 [2024-07-25 14:04:35.490331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.607 [2024-07-25 14:04:35.490340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:38.607 [2024-07-25 14:04:35.490356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:38.607 qpair failed and we were unable to recover it. 
[... the failure sequence repeats 27 further times, 2024-07-25 14:04:35.500290 through 14:04:35.760990, unchanged apart from timestamps ...]
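Every iteration ends with spdk_nvme_qpair_process_completions() (nvme_qpair.c:804) reporting CQ transport error -6, i.e. -ENXIO ("No such device or address"): once the TCP qpair has failed to connect, polling it yields a negative errno instead of a completion count. A minimal host-side polling sketch, assuming SPDK's public spdk/nvme.h; poll_io_qpair() and its bail-out policy are illustrative, not the test's actual code:

/* Sketch: poll an I/O qpair and detect the transport failure above. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* Returns the number of completions reaped, or a negative errno
	 * such as -ENXIO (-6) once the transport declares the qpair dead. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair failed: %d\n", rc);
		return false; /* caller should free the qpair and reconnect */
	}
	return true;
}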
00:36:39.129 [2024-07-25 14:04:35.771078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.771159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.771177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.771186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.771194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.771215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.781062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.781145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.781162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.781171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.781179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.781195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.791072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.791242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.791260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.791269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.791277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.791295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 
00:36:39.129 [2024-07-25 14:04:35.801036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.801113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.801130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.801139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.801147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.801164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.811199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.811282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.811299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.811308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.811317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.811334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.821172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.821253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.821273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.821282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.821290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.821307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 
00:36:39.129 [2024-07-25 14:04:35.831213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.831293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.831311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.831320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.831328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.831345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.841276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.841382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.841399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.841408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.841417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.841434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.851247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.851324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.851341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.851350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.851359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.851375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 
00:36:39.129 [2024-07-25 14:04:35.861285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.861368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.861385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.861395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.861406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.861423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.871303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.871389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.871406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.871415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.871423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.871440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 00:36:39.129 [2024-07-25 14:04:35.881339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.881419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.129 [2024-07-25 14:04:35.881437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.129 [2024-07-25 14:04:35.881446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.129 [2024-07-25 14:04:35.881454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.129 [2024-07-25 14:04:35.881471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.129 qpair failed and we were unable to recover it. 
00:36:39.129 [2024-07-25 14:04:35.891380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.129 [2024-07-25 14:04:35.891461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.130 [2024-07-25 14:04:35.891479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.130 [2024-07-25 14:04:35.891488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.130 [2024-07-25 14:04:35.891497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.130 [2024-07-25 14:04:35.891514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.130 qpair failed and we were unable to recover it. 00:36:39.130 [2024-07-25 14:04:35.901390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.130 [2024-07-25 14:04:35.901473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.130 [2024-07-25 14:04:35.901491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.130 [2024-07-25 14:04:35.901500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.130 [2024-07-25 14:04:35.901508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.130 [2024-07-25 14:04:35.901525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.130 qpair failed and we were unable to recover it. 00:36:39.130 [2024-07-25 14:04:35.911418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.130 [2024-07-25 14:04:35.911501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.130 [2024-07-25 14:04:35.911522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.130 [2024-07-25 14:04:35.911531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.130 [2024-07-25 14:04:35.911539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.130 [2024-07-25 14:04:35.911557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.130 qpair failed and we were unable to recover it. 
00:36:39.130 [2024-07-25 14:04:35.921455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.921538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.921555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.921564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.921572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.921589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.931486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.931573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.931590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.931600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.931608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.931625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.941488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.941572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.941590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.941599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.941607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.941624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.951587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.951665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.951683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.951692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.951703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.951725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.961579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.961660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.961678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.961687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.961696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.961713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.971595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.971676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.971694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.971703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.971711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.971733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.981604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.981701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.981723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.981732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.981740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.981757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:35.991664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:35.991756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:35.991772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:35.991781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:35.991789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:35.991806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:36.001682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:36.001765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:36.001783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:36.001792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:36.001800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:36.001817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.130 [2024-07-25 14:04:36.011710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.130 [2024-07-25 14:04:36.011796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.130 [2024-07-25 14:04:36.011814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.130 [2024-07-25 14:04:36.011823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.130 [2024-07-25 14:04:36.011832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.130 [2024-07-25 14:04:36.011849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.130 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.021722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.021853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.021871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.021881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.021890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.021908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.031761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.031933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.031951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.031961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.031970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.031988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.041879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.041959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.041976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.041985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.041997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.042015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.051826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.051905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.051922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.051931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.051940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.051957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.061869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.061949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.061967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.061975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.061984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.062001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.071869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.071948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.071965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.071974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.071982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.071999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.081975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.082062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.082079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.082088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.082096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.082114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.091943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.092025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.391 [2024-07-25 14:04:36.092042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.391 [2024-07-25 14:04:36.092052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.391 [2024-07-25 14:04:36.092060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.391 [2024-07-25 14:04:36.092077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.391 qpair failed and we were unable to recover it.
00:36:39.391 [2024-07-25 14:04:36.101896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.391 [2024-07-25 14:04:36.101980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.101997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.102006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.102014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.102031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.112013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.112116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.112133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.112142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.112151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.112168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.121989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.122071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.122088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.122097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.122105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.122123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.132015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.132095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.132113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.132122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.132134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.132151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.142059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.142142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.142159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.142168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.142176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.142193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.152092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.152177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.152194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.152203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.152212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.152229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.162034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.162118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.162135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.162144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.162152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.162169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.172085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.172173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.172190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.172199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.172207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.172224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.182126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.182204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.182222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.182231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.392 [2024-07-25 14:04:36.182239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.392 [2024-07-25 14:04:36.182256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.392 qpair failed and we were unable to recover it.
00:36:39.392 [2024-07-25 14:04:36.192139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.392 [2024-07-25 14:04:36.192221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.392 [2024-07-25 14:04:36.192238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.392 [2024-07-25 14:04:36.192247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.192255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.192272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.202229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.202312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.202329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.202337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.202346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.202362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.212284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.212365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.212382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.212391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.212400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.212416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.222286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.222370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.222388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.222402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.222411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.222428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.232244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.232324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.232341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.232350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.232358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.232375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.242347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.242426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.242443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.242452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.242460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.242477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.252319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.252401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.252418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.252427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.252435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.252452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.262372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.262453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.262471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.262480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.262489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.262507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.393 [2024-07-25 14:04:36.272407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.393 [2024-07-25 14:04:36.272489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.393 [2024-07-25 14:04:36.272506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.393 [2024-07-25 14:04:36.272515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.393 [2024-07-25 14:04:36.272523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.393 [2024-07-25 14:04:36.272540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.393 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.282469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.282551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.282568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.282577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.282586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.282602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.292491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.292572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.292589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.292598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.292606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.292622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.302525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.302612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.302629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.302638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.302647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.302663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.312516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.312628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.312646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.312659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.312667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.312684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.322569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.322649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.322666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.322675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.322683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.322700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.332599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.332680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.332698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.332707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.332721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.332739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.342636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.342722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.342740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.654 [2024-07-25 14:04:36.342749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.654 [2024-07-25 14:04:36.342757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.654 [2024-07-25 14:04:36.342774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.654 qpair failed and we were unable to recover it.
00:36:39.654 [2024-07-25 14:04:36.352654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.654 [2024-07-25 14:04:36.352735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.654 [2024-07-25 14:04:36.352753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.352762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.352770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.352787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.362685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.362770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.362788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.362797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.362805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.362822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.372732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.372814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.372832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.372841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.372849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.372866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.382697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.382794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.382811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.382819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.382828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.382845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.392741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.392823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.392841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.392850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.392858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.392876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.402761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.402838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.402855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.402867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.402876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.402892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.412826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.412909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.412927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.412936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.412944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.412961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.422877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.422962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.422979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.422988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.422997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.423013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.432866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.432947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.432965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.432974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.432982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.432999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.443006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.443104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.443121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.443130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.443138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.443155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.452919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.453002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.453019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.453029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.453037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.453054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.463014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.463104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.463121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.463130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.463138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.463155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.472979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.473058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.473076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.473085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.473094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.655 [2024-07-25 14:04:36.473111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.655 qpair failed and we were unable to recover it.
00:36:39.655 [2024-07-25 14:04:36.482995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:39.655 [2024-07-25 14:04:36.483166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:39.655 [2024-07-25 14:04:36.483184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:39.655 [2024-07-25 14:04:36.483194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:39.655 [2024-07-25 14:04:36.483202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:39.656 [2024-07-25 14:04:36.483220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:39.656 qpair failed and we were unable to recover it.
00:36:39.656 [2024-07-25 14:04:36.492975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.656 [2024-07-25 14:04:36.493058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.656 [2024-07-25 14:04:36.493078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.656 [2024-07-25 14:04:36.493087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.656 [2024-07-25 14:04:36.493096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.656 [2024-07-25 14:04:36.493113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.656 qpair failed and we were unable to recover it. 00:36:39.656 [2024-07-25 14:04:36.503119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.656 [2024-07-25 14:04:36.503199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.656 [2024-07-25 14:04:36.503217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.656 [2024-07-25 14:04:36.503226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.656 [2024-07-25 14:04:36.503234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.656 [2024-07-25 14:04:36.503252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.656 qpair failed and we were unable to recover it. 00:36:39.656 [2024-07-25 14:04:36.513096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.656 [2024-07-25 14:04:36.513174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.656 [2024-07-25 14:04:36.513191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.656 [2024-07-25 14:04:36.513200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.656 [2024-07-25 14:04:36.513208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.656 [2024-07-25 14:04:36.513225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.656 qpair failed and we were unable to recover it. 
00:36:39.656 [2024-07-25 14:04:36.523124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.656 [2024-07-25 14:04:36.523205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.656 [2024-07-25 14:04:36.523222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.656 [2024-07-25 14:04:36.523231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.656 [2024-07-25 14:04:36.523240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.656 [2024-07-25 14:04:36.523257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.656 qpair failed and we were unable to recover it. 00:36:39.656 [2024-07-25 14:04:36.533154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.656 [2024-07-25 14:04:36.533237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.656 [2024-07-25 14:04:36.533254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.656 [2024-07-25 14:04:36.533264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.656 [2024-07-25 14:04:36.533272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.656 [2024-07-25 14:04:36.533289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.656 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.543185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.543273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.543290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.543299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.543307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.543325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 
00:36:39.916 [2024-07-25 14:04:36.553211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.553292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.553312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.553321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.553329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.553347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.563241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.563412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.563431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.563440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.563449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.563467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.573274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.573358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.573376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.573385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.573393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.573411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 
00:36:39.916 [2024-07-25 14:04:36.583290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.583369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.583390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.583399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.583408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.583425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.593330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.593417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.593435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.593444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.593452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.593469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.603351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.603431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.603448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.603457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.603466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.603482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 
00:36:39.916 [2024-07-25 14:04:36.613379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.613458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.613475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.613484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.613493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.613509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.623398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.623478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.916 [2024-07-25 14:04:36.623495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.916 [2024-07-25 14:04:36.623504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.916 [2024-07-25 14:04:36.623513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.916 [2024-07-25 14:04:36.623532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.916 qpair failed and we were unable to recover it. 00:36:39.916 [2024-07-25 14:04:36.633431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.916 [2024-07-25 14:04:36.633511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.633529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.633538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.633546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.633563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 
00:36:39.917 [2024-07-25 14:04:36.643459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.643542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.643559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.643568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.643576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.643594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.653489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.653571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.653588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.653597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.653605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.653622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.663519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.663603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.663620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.663629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.663637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.663655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 
00:36:39.917 [2024-07-25 14:04:36.673537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.673618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.673638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.673647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.673655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.673673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.683583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.683665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.683682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.683691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.683700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.683720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.693608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.693691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.693708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.693720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.693729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.693746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 
00:36:39.917 [2024-07-25 14:04:36.703639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.703724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.703741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.703751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.703759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.703776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.713665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.713759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.713776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.713785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.713794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.713814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.723635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.723722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.723739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.723749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.723757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.723775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 
00:36:39.917 [2024-07-25 14:04:36.733733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.733812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.733829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.733838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.733847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.733864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.743760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.743840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.743858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.743866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.743875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.743892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 00:36:39.917 [2024-07-25 14:04:36.753785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.753864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.753881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.917 [2024-07-25 14:04:36.753890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.917 [2024-07-25 14:04:36.753898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.917 [2024-07-25 14:04:36.753915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.917 qpair failed and we were unable to recover it. 
00:36:39.917 [2024-07-25 14:04:36.763802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.917 [2024-07-25 14:04:36.763970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.917 [2024-07-25 14:04:36.763991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.918 [2024-07-25 14:04:36.764000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.918 [2024-07-25 14:04:36.764009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.918 [2024-07-25 14:04:36.764027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.918 qpair failed and we were unable to recover it. 00:36:39.918 [2024-07-25 14:04:36.773818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.918 [2024-07-25 14:04:36.773900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.918 [2024-07-25 14:04:36.773918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.918 [2024-07-25 14:04:36.773927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.918 [2024-07-25 14:04:36.773935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.918 [2024-07-25 14:04:36.773951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.918 qpair failed and we were unable to recover it. 00:36:39.918 [2024-07-25 14:04:36.783868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.918 [2024-07-25 14:04:36.783962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.918 [2024-07-25 14:04:36.783980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.918 [2024-07-25 14:04:36.783989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.918 [2024-07-25 14:04:36.783997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.918 [2024-07-25 14:04:36.784014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.918 qpair failed and we were unable to recover it. 
00:36:39.918 [2024-07-25 14:04:36.793916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:39.918 [2024-07-25 14:04:36.793995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:39.918 [2024-07-25 14:04:36.794012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:39.918 [2024-07-25 14:04:36.794021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:39.918 [2024-07-25 14:04:36.794029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:39.918 [2024-07-25 14:04:36.794046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.918 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.803933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.804013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.804030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.804039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.804048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.804068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.813946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.814029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.814046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.814055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.814063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.814080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 
00:36:40.178 [2024-07-25 14:04:36.823998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.824079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.824096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.824105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.824113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.824129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.834027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.834106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.834123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.834132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.834140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.834157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.844061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.844145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.844163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.844171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.844180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.844197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 
00:36:40.178 [2024-07-25 14:04:36.854086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.854169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.854189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.854198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.854207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.854224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.864108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.864187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.864204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.864213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.864222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.864239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.874143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.874223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.874240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.874249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.874257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.874274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 
00:36:40.178 [2024-07-25 14:04:36.884172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.884252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.884269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.884278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.884286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.884303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.894207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.894287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.894304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.894314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.894328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.894346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 00:36:40.178 [2024-07-25 14:04:36.904273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.904382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.904400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.904409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.904417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.178 [2024-07-25 14:04:36.904435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.178 qpair failed and we were unable to recover it. 
00:36:40.178 [2024-07-25 14:04:36.914247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.178 [2024-07-25 14:04:36.914329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.178 [2024-07-25 14:04:36.914346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.178 [2024-07-25 14:04:36.914355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.178 [2024-07-25 14:04:36.914363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.914380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 00:36:40.179 [2024-07-25 14:04:36.924219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.924304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.924321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.924330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.924338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.924355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 00:36:40.179 [2024-07-25 14:04:36.934251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.934333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.934350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.934359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.934367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.934384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 
00:36:40.179 [2024-07-25 14:04:36.944342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.944425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.944442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.944451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.944459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.944477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 00:36:40.179 [2024-07-25 14:04:36.954344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.954425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.954442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.954451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.954459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.954477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 00:36:40.179 [2024-07-25 14:04:36.964421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.964499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.964518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.964527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.964536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.964553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 
00:36:40.179 [2024-07-25 14:04:36.974466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.974545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.974563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.974572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.974580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.974598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 00:36:40.179 [2024-07-25 14:04:36.984438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.984520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.984537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.984546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.984558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.984575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 00:36:40.179 [2024-07-25 14:04:36.994466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.179 [2024-07-25 14:04:36.994548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.179 [2024-07-25 14:04:36.994565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.179 [2024-07-25 14:04:36.994574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.179 [2024-07-25 14:04:36.994582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.179 [2024-07-25 14:04:36.994599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.179 qpair failed and we were unable to recover it. 
00:36:40.179 [2024-07-25 14:04:37.004493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.179 [2024-07-25 14:04:37.004583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.179 [2024-07-25 14:04:37.004600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.179 [2024-07-25 14:04:37.004609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.179 [2024-07-25 14:04:37.004618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.179 [2024-07-25 14:04:37.004635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.179 qpair failed and we were unable to recover it.
00:36:40.179 [2024-07-25 14:04:37.014554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.179 [2024-07-25 14:04:37.014632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.179 [2024-07-25 14:04:37.014649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.179 [2024-07-25 14:04:37.014658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.179 [2024-07-25 14:04:37.014666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.179 [2024-07-25 14:04:37.014683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.179 qpair failed and we were unable to recover it.
00:36:40.179 [2024-07-25 14:04:37.024498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.179 [2024-07-25 14:04:37.024582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.179 [2024-07-25 14:04:37.024599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.179 [2024-07-25 14:04:37.024608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.179 [2024-07-25 14:04:37.024616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.179 [2024-07-25 14:04:37.024633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.179 qpair failed and we were unable to recover it.
00:36:40.179 [2024-07-25 14:04:37.034637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.179 [2024-07-25 14:04:37.034722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.179 [2024-07-25 14:04:37.034739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.179 [2024-07-25 14:04:37.034748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.179 [2024-07-25 14:04:37.034756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.179 [2024-07-25 14:04:37.034774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.179 qpair failed and we were unable to recover it.
00:36:40.179 [2024-07-25 14:04:37.044621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.179 [2024-07-25 14:04:37.044705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.179 [2024-07-25 14:04:37.044726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.179 [2024-07-25 14:04:37.044735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.179 [2024-07-25 14:04:37.044743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.179 [2024-07-25 14:04:37.044760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.180 qpair failed and we were unable to recover it.
00:36:40.180 [2024-07-25 14:04:37.054654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.180 [2024-07-25 14:04:37.054740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.180 [2024-07-25 14:04:37.054757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.180 [2024-07-25 14:04:37.054766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.180 [2024-07-25 14:04:37.054774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.180 [2024-07-25 14:04:37.054791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.180 qpair failed and we were unable to recover it.
00:36:40.180 [2024-07-25 14:04:37.064603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.180 [2024-07-25 14:04:37.064684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.180 [2024-07-25 14:04:37.064702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.180 [2024-07-25 14:04:37.064711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.180 [2024-07-25 14:04:37.064725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.180 [2024-07-25 14:04:37.064742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.180 qpair failed and we were unable to recover it.
00:36:40.440 [2024-07-25 14:04:37.074703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.440 [2024-07-25 14:04:37.074802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.440 [2024-07-25 14:04:37.074819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.440 [2024-07-25 14:04:37.074829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.440 [2024-07-25 14:04:37.074841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.440 [2024-07-25 14:04:37.074858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.440 qpair failed and we were unable to recover it.
00:36:40.440 [2024-07-25 14:04:37.084748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.440 [2024-07-25 14:04:37.084831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.440 [2024-07-25 14:04:37.084848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.440 [2024-07-25 14:04:37.084857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.440 [2024-07-25 14:04:37.084865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.440 [2024-07-25 14:04:37.084882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.440 qpair failed and we were unable to recover it.
00:36:40.440 [2024-07-25 14:04:37.094785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.440 [2024-07-25 14:04:37.094866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.440 [2024-07-25 14:04:37.094883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.440 [2024-07-25 14:04:37.094892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.440 [2024-07-25 14:04:37.094901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.440 [2024-07-25 14:04:37.094918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.440 qpair failed and we were unable to recover it.
00:36:40.440 [2024-07-25 14:04:37.104762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.440 [2024-07-25 14:04:37.104845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.440 [2024-07-25 14:04:37.104862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.440 [2024-07-25 14:04:37.104871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.440 [2024-07-25 14:04:37.104879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.440 [2024-07-25 14:04:37.104896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.440 qpair failed and we were unable to recover it.
00:36:40.440 [2024-07-25 14:04:37.114804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.440 [2024-07-25 14:04:37.114885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.440 [2024-07-25 14:04:37.114902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.440 [2024-07-25 14:04:37.114911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.440 [2024-07-25 14:04:37.114919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.440 [2024-07-25 14:04:37.114936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.124846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.124926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.124944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.124953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.124961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.124978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.134877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.134955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.134973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.134982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.134990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.135007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.144906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.144986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.145003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.145012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.145020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.145037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.154920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.155005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.155023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.155031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.155040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.155057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.164970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.165050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.165067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.165079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.165087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.165104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.174982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.175064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.175081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.175090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.175098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.175115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.185034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.185115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.185132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.185141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.185149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.185166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.195146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.195223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.195241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.195250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.195258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.195276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.205090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.205173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.205190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.205199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.205207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.205224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.215133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.215213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.215231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.215240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.215248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.215264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.225111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.225212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.225229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.225238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.225247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.225264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.235186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.235265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.235283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.235292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.441 [2024-07-25 14:04:37.235300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.441 [2024-07-25 14:04:37.235317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.441 qpair failed and we were unable to recover it.
00:36:40.441 [2024-07-25 14:04:37.245204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.441 [2024-07-25 14:04:37.245286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.441 [2024-07-25 14:04:37.245303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.441 [2024-07-25 14:04:37.245312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.245320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.245336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.255236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.255316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.255333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.255345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.255354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.255371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.265263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.265342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.265361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.265370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.265379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.265396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.275209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.275292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.275309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.275318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.275326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.275342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.285260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.285340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.285357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.285366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.285375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.285392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.295387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.295493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.295510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.295520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.295529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.295545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.305349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.305428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.305446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.305455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.305463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.305481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.315377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.315457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.315475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.315485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.315494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.315511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.442 [2024-07-25 14:04:37.325426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.442 [2024-07-25 14:04:37.325506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.442 [2024-07-25 14:04:37.325523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.442 [2024-07-25 14:04:37.325533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.442 [2024-07-25 14:04:37.325541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.442 [2024-07-25 14:04:37.325559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.442 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.335464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.335545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.335563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.335572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.335581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.702 [2024-07-25 14:04:37.335598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.702 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.345475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.345556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.345574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.345586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.345594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.702 [2024-07-25 14:04:37.345611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.702 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.355548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.355656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.355682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.355691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.355700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.702 [2024-07-25 14:04:37.355727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.702 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.365526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.365607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.365624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.365633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.365642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.702 [2024-07-25 14:04:37.365659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.702 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.375573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.375655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.375672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.375680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.375689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.702 [2024-07-25 14:04:37.375706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.702 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.385588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.385669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.385686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.385696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.385704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.702 [2024-07-25 14:04:37.385724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.702 qpair failed and we were unable to recover it.
00:36:40.702 [2024-07-25 14:04:37.395620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.702 [2024-07-25 14:04:37.395699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.702 [2024-07-25 14:04:37.395721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.702 [2024-07-25 14:04:37.395731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.702 [2024-07-25 14:04:37.395740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.395757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.405636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.405803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.405822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.405831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.405839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.405857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.415694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.415777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.415795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.415804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.415813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.415830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.425691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.425777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.425794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.425803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.425812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.425829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.435745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.435825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.435845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.435854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.435862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.435879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.445781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.445862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.445879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.445888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.445897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.445914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.455805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.455885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.455903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.455912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.455920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.455937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.465801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.465878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.465895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.465904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.465913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.465930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.475854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.475934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.475951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.475960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.475968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.475986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.485870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.485948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.485965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.485974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.485983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.485999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.495922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.496003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.496020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.496029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.496037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.496054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.505920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.506001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.506018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.506027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.506035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.506052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.515962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.703 [2024-07-25 14:04:37.516045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.703 [2024-07-25 14:04:37.516065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.703 [2024-07-25 14:04:37.516076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.703 [2024-07-25 14:04:37.516086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.703 [2024-07-25 14:04:37.516104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.703 qpair failed and we were unable to recover it.
00:36:40.703 [2024-07-25 14:04:37.525989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.526072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.526094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.526104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.526114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.526132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.704 [2024-07-25 14:04:37.536015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.536095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.536113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.536122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.536130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.536148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.704 [2024-07-25 14:04:37.545962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.546045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.546063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.546073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.546082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.546099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.704 [2024-07-25 14:04:37.556077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.556161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.556178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.556187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.556196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.556213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.704 [2024-07-25 14:04:37.566036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.566113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.566131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.566141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.566150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.566170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.704 [2024-07-25 14:04:37.576127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.576214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.576232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.576241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.576250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.576267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.704 [2024-07-25 14:04:37.586146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.704 [2024-07-25 14:04:37.586231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.704 [2024-07-25 14:04:37.586248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.704 [2024-07-25 14:04:37.586257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.704 [2024-07-25 14:04:37.586266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.704 [2024-07-25 14:04:37.586283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.704 qpair failed and we were unable to recover it.
00:36:40.964 [2024-07-25 14:04:37.596174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.964 [2024-07-25 14:04:37.596257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.964 [2024-07-25 14:04:37.596275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.964 [2024-07-25 14:04:37.596284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.964 [2024-07-25 14:04:37.596293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.964 [2024-07-25 14:04:37.596310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.964 qpair failed and we were unable to recover it.
00:36:40.964 [2024-07-25 14:04:37.606227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.964 [2024-07-25 14:04:37.606308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.964 [2024-07-25 14:04:37.606325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.964 [2024-07-25 14:04:37.606334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.964 [2024-07-25 14:04:37.606342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.964 [2024-07-25 14:04:37.606358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.964 qpair failed and we were unable to recover it.
00:36:40.964 [2024-07-25 14:04:37.616183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.616268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.616290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.616299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.616307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.616324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.626196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.626275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.626293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.626301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.626310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.626327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.636230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.636404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.636423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.636432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.636440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.636458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.646310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.646392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.646410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.646418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.646427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.646444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.656341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.656423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.656440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.656449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.656457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.656480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.666301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.666379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.666396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.666405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.666414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.666430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.676345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.676426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.676444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.676452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.676461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.676477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.686441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:40.965 [2024-07-25 14:04:37.686522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:40.965 [2024-07-25 14:04:37.686540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:40.965 [2024-07-25 14:04:37.686549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:40.965 [2024-07-25 14:04:37.686558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:40.965 [2024-07-25 14:04:37.686574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:40.965 qpair failed and we were unable to recover it.
00:36:40.965 [2024-07-25 14:04:37.696472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.965 [2024-07-25 14:04:37.696556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.965 [2024-07-25 14:04:37.696574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.965 [2024-07-25 14:04:37.696583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.965 [2024-07-25 14:04:37.696591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.965 [2024-07-25 14:04:37.696609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-07-25 14:04:37.706480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.965 [2024-07-25 14:04:37.706560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.965 [2024-07-25 14:04:37.706581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.965 [2024-07-25 14:04:37.706590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.965 [2024-07-25 14:04:37.706598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.965 [2024-07-25 14:04:37.706615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-07-25 14:04:37.716503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.965 [2024-07-25 14:04:37.716582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.965 [2024-07-25 14:04:37.716599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.965 [2024-07-25 14:04:37.716609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.965 [2024-07-25 14:04:37.716617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.965 [2024-07-25 14:04:37.716634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-07-25 14:04:37.726570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.965 [2024-07-25 14:04:37.726650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.965 [2024-07-25 14:04:37.726666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.965 [2024-07-25 14:04:37.726675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.965 [2024-07-25 14:04:37.726684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.965 [2024-07-25 14:04:37.726700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-07-25 14:04:37.736517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.965 [2024-07-25 14:04:37.736605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.965 [2024-07-25 14:04:37.736622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.965 [2024-07-25 14:04:37.736631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.965 [2024-07-25 14:04:37.736639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.965 [2024-07-25 14:04:37.736656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-07-25 14:04:37.746607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.965 [2024-07-25 14:04:37.746692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.746709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.746722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.746731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.746750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 
00:36:40.966 [2024-07-25 14:04:37.756646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.756726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.756744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.756753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.756761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.756779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:40.966 [2024-07-25 14:04:37.766647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.766735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.766753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.766762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.766771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.766787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:40.966 [2024-07-25 14:04:37.776726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.776809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.776827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.776836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.776845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.776861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 
00:36:40.966 [2024-07-25 14:04:37.786738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.786821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.786839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.786847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.786856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.786873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:40.966 [2024-07-25 14:04:37.796763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.796843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.796863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.796872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.796881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.796898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:40.966 [2024-07-25 14:04:37.806803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.806882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.806899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.806908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.806916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.806933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 
00:36:40.966 [2024-07-25 14:04:37.816829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.816911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.816929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.816938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.816947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.816963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:40.966 [2024-07-25 14:04:37.826823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.826907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.826924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.826933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.826942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.826959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:40.966 [2024-07-25 14:04:37.836861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.836947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.836964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.836973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.836984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.837002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 
00:36:40.966 [2024-07-25 14:04:37.846833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:40.966 [2024-07-25 14:04:37.846924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:40.966 [2024-07-25 14:04:37.846941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:40.966 [2024-07-25 14:04:37.846950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:40.966 [2024-07-25 14:04:37.846958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:40.966 [2024-07-25 14:04:37.846976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:40.966 qpair failed and we were unable to recover it. 00:36:41.226 [2024-07-25 14:04:37.856866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.226 [2024-07-25 14:04:37.856948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.226 [2024-07-25 14:04:37.856965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.226 [2024-07-25 14:04:37.856975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.226 [2024-07-25 14:04:37.856983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.226 [2024-07-25 14:04:37.857000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.226 qpair failed and we were unable to recover it. 00:36:41.226 [2024-07-25 14:04:37.866937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.226 [2024-07-25 14:04:37.867016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.226 [2024-07-25 14:04:37.867033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.226 [2024-07-25 14:04:37.867042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.226 [2024-07-25 14:04:37.867051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.226 [2024-07-25 14:04:37.867068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.226 qpair failed and we were unable to recover it. 
00:36:41.226 [2024-07-25 14:04:37.876926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.226 [2024-07-25 14:04:37.877008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.226 [2024-07-25 14:04:37.877025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.226 [2024-07-25 14:04:37.877034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.226 [2024-07-25 14:04:37.877043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.226 [2024-07-25 14:04:37.877059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.226 qpair failed and we were unable to recover it. 00:36:41.226 [2024-07-25 14:04:37.887012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.226 [2024-07-25 14:04:37.887181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.226 [2024-07-25 14:04:37.887199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.226 [2024-07-25 14:04:37.887208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.226 [2024-07-25 14:04:37.887216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.226 [2024-07-25 14:04:37.887234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.226 qpair failed and we were unable to recover it. 00:36:41.226 [2024-07-25 14:04:37.897040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.226 [2024-07-25 14:04:37.897122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.226 [2024-07-25 14:04:37.897139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.226 [2024-07-25 14:04:37.897148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.897156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.897173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 
00:36:41.227 [2024-07-25 14:04:37.907023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.907105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.907122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.907131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.907139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.907156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:37.917050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.917131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.917149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.917158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.917167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.917183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:37.927052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.927134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.927151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.927160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.927172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.927189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 
00:36:41.227 [2024-07-25 14:04:37.937198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.937305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.937321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.937330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.937339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.937356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:37.947106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.947269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.947287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.947296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.947304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.947321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:37.957223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.957301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.957319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.957328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.957337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.957354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 
00:36:41.227 [2024-07-25 14:04:37.967243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.967322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.967339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.967348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.967357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.967374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:37.977275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.977359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.977376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.977386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.977394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.977410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:37.987268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.987350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.987367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.987377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.987385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.987402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 
00:36:41.227 [2024-07-25 14:04:37.997308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:37.997390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:37.997408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:37.997418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:37.997426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:37.997444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:38.007320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:38.007404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:38.007421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:38.007430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:38.007438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:38.007455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 00:36:41.227 [2024-07-25 14:04:38.017316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:38.017398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:38.017416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:38.017425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.227 [2024-07-25 14:04:38.017436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.227 [2024-07-25 14:04:38.017454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.227 qpair failed and we were unable to recover it. 
00:36:41.227 [2024-07-25 14:04:38.027336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.227 [2024-07-25 14:04:38.027417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.227 [2024-07-25 14:04:38.027434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.227 [2024-07-25 14:04:38.027443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.027452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.027468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 00:36:41.228 [2024-07-25 14:04:38.037379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.037461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.037478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.037487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.037495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.037512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 00:36:41.228 [2024-07-25 14:04:38.047460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.047538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.047556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.047565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.047573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.047590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 
00:36:41.228 [2024-07-25 14:04:38.057437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.057514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.057531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.057541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.057549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.057566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 00:36:41.228 [2024-07-25 14:04:38.067515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.067617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.067634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.067643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.067652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.067669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 00:36:41.228 [2024-07-25 14:04:38.077577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.077663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.077680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.077689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.077697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.077720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 
00:36:41.228 [2024-07-25 14:04:38.087538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.087662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.087680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.087689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.087697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.087719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 00:36:41.228 [2024-07-25 14:04:38.097624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.097734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.097751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.097760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.097769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.097786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 00:36:41.228 [2024-07-25 14:04:38.107650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.228 [2024-07-25 14:04:38.107735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.228 [2024-07-25 14:04:38.107753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.228 [2024-07-25 14:04:38.107765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.228 [2024-07-25 14:04:38.107773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.228 [2024-07-25 14:04:38.107791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.228 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-07-25 14:04:38.117653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.117739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.117757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.117766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.117775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.117792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.127628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.127706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.127728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.127737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.127745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.127762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.137713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.137805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.137822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.137832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.137840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.137857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-07-25 14:04:38.147684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.147762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.147780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.147789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.147797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.147814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.157792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.157872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.157889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.157898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.157906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.157923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.167749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.167827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.167844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.167853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.167861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.167879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-07-25 14:04:38.177850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.177931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.177948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.177957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.177965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.177982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.187957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.188043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.188061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.188070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.188078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.188096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.197908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.197989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.198007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.198019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.198027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.198044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-07-25 14:04:38.207938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.208018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.488 [2024-07-25 14:04:38.208035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.488 [2024-07-25 14:04:38.208044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.488 [2024-07-25 14:04:38.208052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.488 [2024-07-25 14:04:38.208069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-07-25 14:04:38.217977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.488 [2024-07-25 14:04:38.218058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.218076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.218085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.218093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.218110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.228002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.228087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.228104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.228113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.228121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.228138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 
00:36:41.489 [2024-07-25 14:04:38.237946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.238026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.238043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.238052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.238061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.238077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.248052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.248133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.248150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.248159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.248168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.248184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.258089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.258170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.258187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.258196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.258204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.258221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 
00:36:41.489 [2024-07-25 14:04:38.268128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.268214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.268232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.268241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.268250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.268267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.278151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.278234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.278251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.278260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.278268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.278284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.288169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.288300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.288317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.288330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.288339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.288356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 
00:36:41.489 [2024-07-25 14:04:38.298195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.298277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.298293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.298302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.298311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.298327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.308217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.308299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.308316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.308325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.308333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.308350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.318352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.318439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.318457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.318466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.318475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.318492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 
00:36:41.489 [2024-07-25 14:04:38.328266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.328350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.328367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.328377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.328385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.328402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.338307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.338385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.338402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.338411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.489 [2024-07-25 14:04:38.338419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.489 [2024-07-25 14:04:38.338437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-07-25 14:04:38.348333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.489 [2024-07-25 14:04:38.348412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.489 [2024-07-25 14:04:38.348429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.489 [2024-07-25 14:04:38.348438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.490 [2024-07-25 14:04:38.348447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.490 [2024-07-25 14:04:38.348464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.490 qpair failed and we were unable to recover it. 
00:36:41.490 [2024-07-25 14:04:38.358338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.490 [2024-07-25 14:04:38.358418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.490 [2024-07-25 14:04:38.358435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.490 [2024-07-25 14:04:38.358444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.490 [2024-07-25 14:04:38.358453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.490 [2024-07-25 14:04:38.358469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.490 qpair failed and we were unable to recover it. 00:36:41.490 [2024-07-25 14:04:38.368333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.490 [2024-07-25 14:04:38.368410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.490 [2024-07-25 14:04:38.368427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.490 [2024-07-25 14:04:38.368436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.490 [2024-07-25 14:04:38.368445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.490 [2024-07-25 14:04:38.368461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.490 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.378423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.378506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.378527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.378536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.378544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.378561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 
00:36:41.750 [2024-07-25 14:04:38.388454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.388537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.388554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.388563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.388571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.388588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.398471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.398550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.398567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.398576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.398584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.398600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.408513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.408590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.408607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.408616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.408624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.408640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 
00:36:41.750 [2024-07-25 14:04:38.418529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.418613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.418630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.418640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.418648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.418664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.428564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.428651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.428668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.428677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.428685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.428702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.438675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.438764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.438781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.438789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.438798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.438815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 
00:36:41.750 [2024-07-25 14:04:38.448628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.448706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.448728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.448737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.448745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.448762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.458655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.750 [2024-07-25 14:04:38.458740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.750 [2024-07-25 14:04:38.458757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.750 [2024-07-25 14:04:38.458766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.750 [2024-07-25 14:04:38.458775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.750 [2024-07-25 14:04:38.458792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.750 qpair failed and we were unable to recover it. 00:36:41.750 [2024-07-25 14:04:38.468687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.468769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.468789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.468798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.468807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.468824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 
00:36:41.751 [2024-07-25 14:04:38.478720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.478802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.478819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.478828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.478836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.478854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.488724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.488800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.488818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.488827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.488836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.488853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.498771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.498851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.498868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.498877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.498885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.498902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 
00:36:41.751 [2024-07-25 14:04:38.508796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.508878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.508897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.508906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.508915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.508936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.518813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.518895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.518912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.518922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.518930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.518948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.528872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.528950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.528967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.528976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.528985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.529001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 
00:36:41.751 [2024-07-25 14:04:38.538895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.538978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.538995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.539005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.539013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.539030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.548918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.549000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.549017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.549026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.549035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.549051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.558930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.559004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.559025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.559033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.559042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.559059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 
00:36:41.751 [2024-07-25 14:04:38.568966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.569051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.569069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.569078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.569087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.569104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.578983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.579068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.579086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.579095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.579103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.579120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 00:36:41.751 [2024-07-25 14:04:38.588996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.589090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.589107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.751 [2024-07-25 14:04:38.589116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.751 [2024-07-25 14:04:38.589125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.751 [2024-07-25 14:04:38.589142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.751 qpair failed and we were unable to recover it. 
00:36:41.751 [2024-07-25 14:04:38.599048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.751 [2024-07-25 14:04:38.599149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.751 [2024-07-25 14:04:38.599166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.752 [2024-07-25 14:04:38.599175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.752 [2024-07-25 14:04:38.599183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.752 [2024-07-25 14:04:38.599204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.752 qpair failed and we were unable to recover it. 00:36:41.752 [2024-07-25 14:04:38.609087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.752 [2024-07-25 14:04:38.609167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.752 [2024-07-25 14:04:38.609184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.752 [2024-07-25 14:04:38.609193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.752 [2024-07-25 14:04:38.609201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.752 [2024-07-25 14:04:38.609218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.752 qpair failed and we were unable to recover it. 00:36:41.752 [2024-07-25 14:04:38.619047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.752 [2024-07-25 14:04:38.619127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.752 [2024-07-25 14:04:38.619144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.752 [2024-07-25 14:04:38.619153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.752 [2024-07-25 14:04:38.619162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.752 [2024-07-25 14:04:38.619179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.752 qpair failed and we were unable to recover it. 
00:36:41.752 [2024-07-25 14:04:38.629141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:41.752 [2024-07-25 14:04:38.629220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:41.752 [2024-07-25 14:04:38.629238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:41.752 [2024-07-25 14:04:38.629248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:41.752 [2024-07-25 14:04:38.629256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:41.752 [2024-07-25 14:04:38.629273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.752 qpair failed and we were unable to recover it. 00:36:42.012 [2024-07-25 14:04:38.639169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.012 [2024-07-25 14:04:38.639249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.012 [2024-07-25 14:04:38.639267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.012 [2024-07-25 14:04:38.639276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.012 [2024-07-25 14:04:38.639285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.012 [2024-07-25 14:04:38.639302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.012 qpair failed and we were unable to recover it. 00:36:42.012 [2024-07-25 14:04:38.649192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.012 [2024-07-25 14:04:38.649271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.012 [2024-07-25 14:04:38.649291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.012 [2024-07-25 14:04:38.649301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.012 [2024-07-25 14:04:38.649310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.012 [2024-07-25 14:04:38.649327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.012 qpair failed and we were unable to recover it. 
00:36:42.012 [2024-07-25 14:04:38.659207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.012 [2024-07-25 14:04:38.659289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.012 [2024-07-25 14:04:38.659305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.012 [2024-07-25 14:04:38.659314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.012 [2024-07-25 14:04:38.659323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.012 [2024-07-25 14:04:38.659340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.012 qpair failed and we were unable to recover it. 00:36:42.012 [2024-07-25 14:04:38.669249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.012 [2024-07-25 14:04:38.669329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.012 [2024-07-25 14:04:38.669347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.012 [2024-07-25 14:04:38.669356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.012 [2024-07-25 14:04:38.669365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.012 [2024-07-25 14:04:38.669381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.012 qpair failed and we were unable to recover it. 00:36:42.012 [2024-07-25 14:04:38.679283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.012 [2024-07-25 14:04:38.679368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.012 [2024-07-25 14:04:38.679386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.012 [2024-07-25 14:04:38.679394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.679403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.679420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 
00:36:42.013 [2024-07-25 14:04:38.689235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.689311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.689329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.689338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.689346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.689367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 00:36:42.013 [2024-07-25 14:04:38.699338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.699421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.699439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.699447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.699456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.699473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 00:36:42.013 [2024-07-25 14:04:38.709375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.709459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.709476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.709485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.709494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.709510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 
00:36:42.013 [2024-07-25 14:04:38.719402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.719480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.719497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.719506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.719514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.719531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 00:36:42.013 [2024-07-25 14:04:38.729426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.729508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.729525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.729534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.729542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.729559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 00:36:42.013 [2024-07-25 14:04:38.739459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.739544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.739564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.739573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.739581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.739599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 
00:36:42.013 [2024-07-25 14:04:38.749487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.749572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.749589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.749599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.749607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.749625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 00:36:42.013 [2024-07-25 14:04:38.759526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.759608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.759625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.759634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.759643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.759660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 00:36:42.013 [2024-07-25 14:04:38.769537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.013 [2024-07-25 14:04:38.769614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.013 [2024-07-25 14:04:38.769631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.013 [2024-07-25 14:04:38.769641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.013 [2024-07-25 14:04:38.769649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.013 [2024-07-25 14:04:38.769665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.013 qpair failed and we were unable to recover it. 
00:36:42.013 [2024-07-25 14:04:38.779568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.013 [2024-07-25 14:04:38.779647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.013 [2024-07-25 14:04:38.779664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.013 [2024-07-25 14:04:38.779673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.013 [2024-07-25 14:04:38.779684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.013 [2024-07-25 14:04:38.779702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.013 qpair failed and we were unable to recover it.
00:36:42.013 [2024-07-25 14:04:38.789574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.013 [2024-07-25 14:04:38.789662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.013 [2024-07-25 14:04:38.789679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.013 [2024-07-25 14:04:38.789688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.013 [2024-07-25 14:04:38.789697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.013 [2024-07-25 14:04:38.789718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.013 qpair failed and we were unable to recover it.
00:36:42.013 [2024-07-25 14:04:38.799623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.013 [2024-07-25 14:04:38.799702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.013 [2024-07-25 14:04:38.799723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.013 [2024-07-25 14:04:38.799732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.013 [2024-07-25 14:04:38.799740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.013 [2024-07-25 14:04:38.799757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.013 qpair failed and we were unable to recover it.
00:36:42.013 [2024-07-25 14:04:38.809628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.013 [2024-07-25 14:04:38.809706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.013 [2024-07-25 14:04:38.809727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.013 [2024-07-25 14:04:38.809737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.013 [2024-07-25 14:04:38.809745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.809762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.819688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.819776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.819793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.819802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.819811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.819828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.829702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.829790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.829808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.829816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.829825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.829841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.839746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.839832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.839849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.839858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.839867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.839884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.849769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.849852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.849869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.849878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.849886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.849903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.859807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.859889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.859906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.859915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.859924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.859941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.869810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.869891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.869908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.869917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.869928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.869945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.879856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.879939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.879956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.879965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.879974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.879990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.014 [2024-07-25 14:04:38.889882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.014 [2024-07-25 14:04:38.889960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.014 [2024-07-25 14:04:38.889978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.014 [2024-07-25 14:04:38.889987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.014 [2024-07-25 14:04:38.889995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.014 [2024-07-25 14:04:38.890013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.014 qpair failed and we were unable to recover it.
00:36:42.274 [2024-07-25 14:04:38.899921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.274 [2024-07-25 14:04:38.900003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.274 [2024-07-25 14:04:38.900020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.274 [2024-07-25 14:04:38.900030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.274 [2024-07-25 14:04:38.900038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.274 [2024-07-25 14:04:38.900055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.909953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.910034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.910052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.910060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.910069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.910086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.919981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.920164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.920182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.920191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.920199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.920217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.930001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.930079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.930096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.930105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.930114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.930130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.940033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.940110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.940127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.940137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.940145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.940162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.950057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.950138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.950154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.950164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.950172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.950189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.960092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.960173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.960190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.960199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.960210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.960228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.970112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.970195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.970211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.970220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.970229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.970246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.980141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.980222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.980239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.980248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.980256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.980273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:38.990147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:38.990222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:38.990239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:38.990248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:38.990257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:38.990272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:39.000205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:39.000282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:39.000299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:39.000308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:39.000316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:39.000334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:39.010256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:39.010338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:39.010355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:39.010364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:39.010373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:39.010390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:39.020263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:39.020344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:39.020361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:39.020370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:39.020379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:39.020395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.275 [2024-07-25 14:04:39.030214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.275 [2024-07-25 14:04:39.030294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.275 [2024-07-25 14:04:39.030312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.275 [2024-07-25 14:04:39.030321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.275 [2024-07-25 14:04:39.030330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.275 [2024-07-25 14:04:39.030347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.275 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.040296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.040379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.040396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.040405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.040413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.040430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.050277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.050362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.050379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.050392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.050400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.050418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.060439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.060521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.060538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.060547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.060555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.060572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.070340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.070430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.070448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.070456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.070465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.070482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.080494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.080573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.080590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.080599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.080608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.080625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.090475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.090556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.090573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.090583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.090591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.090608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.100426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.100505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.100522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.100531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.100540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.100557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.110530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.110711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.110732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.110741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.110749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.110766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.120487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.120566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.120583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.120592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.120601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.120618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.130593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.130703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.130724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.130734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.130742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.130759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.140615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.140693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.140710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.140727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.140736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.140753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.150633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.150728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.150746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.150755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.276 [2024-07-25 14:04:39.150763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.276 [2024-07-25 14:04:39.150781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.276 qpair failed and we were unable to recover it.
00:36:42.276 [2024-07-25 14:04:39.160661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.276 [2024-07-25 14:04:39.160744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.276 [2024-07-25 14:04:39.160762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.276 [2024-07-25 14:04:39.160771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.277 [2024-07-25 14:04:39.160779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.277 [2024-07-25 14:04:39.160796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.277 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.170685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.537 [2024-07-25 14:04:39.170770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.537 [2024-07-25 14:04:39.170787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.537 [2024-07-25 14:04:39.170796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.537 [2024-07-25 14:04:39.170805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.537 [2024-07-25 14:04:39.170821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.537 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.180722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.537 [2024-07-25 14:04:39.180808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.537 [2024-07-25 14:04:39.180826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.537 [2024-07-25 14:04:39.180836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.537 [2024-07-25 14:04:39.180844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.537 [2024-07-25 14:04:39.180861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.537 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.190742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.537 [2024-07-25 14:04:39.190828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.537 [2024-07-25 14:04:39.190845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.537 [2024-07-25 14:04:39.190854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.537 [2024-07-25 14:04:39.190863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.537 [2024-07-25 14:04:39.190880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.537 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.200685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.537 [2024-07-25 14:04:39.200765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.537 [2024-07-25 14:04:39.200782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.537 [2024-07-25 14:04:39.200791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.537 [2024-07-25 14:04:39.200799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.537 [2024-07-25 14:04:39.200817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.537 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.210791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.537 [2024-07-25 14:04:39.210874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.537 [2024-07-25 14:04:39.210892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.537 [2024-07-25 14:04:39.210901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.537 [2024-07-25 14:04:39.210909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.537 [2024-07-25 14:04:39.210926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.537 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.220786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.537 [2024-07-25 14:04:39.220870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.537 [2024-07-25 14:04:39.220887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.537 [2024-07-25 14:04:39.220896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.537 [2024-07-25 14:04:39.220904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.537 [2024-07-25 14:04:39.220921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.537 qpair failed and we were unable to recover it.
00:36:42.537 [2024-07-25 14:04:39.230864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.230945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.230963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.230975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.230983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.231001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.240861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.240941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.240959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.240967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.240976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.240993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.250826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.250908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.250925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.250933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.250942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.250958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.260922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.261004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.261022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.261031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.261039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.261056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.270950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.271032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.271051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.271060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.271069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.271086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.280994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.281073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.281091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.281100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.281108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.281125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.291022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.291101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.291119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.291127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.291136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.291153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.301048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.301133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.301150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.301159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.301167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.301184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.311065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.311150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.311168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.311177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.311185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.311202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.321093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.321179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.321201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.321211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.321220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.321237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.331120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.331205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.331222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.331231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.331239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.331256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.341162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.538 [2024-07-25 14:04:39.341241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.538 [2024-07-25 14:04:39.341258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.538 [2024-07-25 14:04:39.341267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.538 [2024-07-25 14:04:39.341275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30
00:36:42.538 [2024-07-25 14:04:39.341292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.538 qpair failed and we were unable to recover it.
00:36:42.538 [2024-07-25 14:04:39.351189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.538 [2024-07-25 14:04:39.351270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.538 [2024-07-25 14:04:39.351287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.538 [2024-07-25 14:04:39.351296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.538 [2024-07-25 14:04:39.351305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.538 [2024-07-25 14:04:39.351322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-07-25 14:04:39.361186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.361266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.361283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.361292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.361300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.361316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-07-25 14:04:39.371249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.371329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.371346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.371355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.371363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.371380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 
00:36:42.539 [2024-07-25 14:04:39.381253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.381335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.381352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.381361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.381369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.381386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-07-25 14:04:39.391221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.391299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.391316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.391325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.391333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.391350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-07-25 14:04:39.401336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.401421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.401439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.401448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.401456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.401473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 
00:36:42.539 [2024-07-25 14:04:39.411280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.411356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.411376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.411386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.411394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.411411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-07-25 14:04:39.421379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.539 [2024-07-25 14:04:39.421478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.539 [2024-07-25 14:04:39.421495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.539 [2024-07-25 14:04:39.421504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.539 [2024-07-25 14:04:39.421513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.539 [2024-07-25 14:04:39.421529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.798 [2024-07-25 14:04:39.431340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.798 [2024-07-25 14:04:39.431417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.798 [2024-07-25 14:04:39.431435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.798 [2024-07-25 14:04:39.431444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.798 [2024-07-25 14:04:39.431452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.798 [2024-07-25 14:04:39.431469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.798 qpair failed and we were unable to recover it. 
00:36:42.798 [2024-07-25 14:04:39.441422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.798 [2024-07-25 14:04:39.441511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.798 [2024-07-25 14:04:39.441528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.798 [2024-07-25 14:04:39.441537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.798 [2024-07-25 14:04:39.441546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.798 [2024-07-25 14:04:39.441564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-07-25 14:04:39.451493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.798 [2024-07-25 14:04:39.451583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.798 [2024-07-25 14:04:39.451601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.798 [2024-07-25 14:04:39.451610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.798 [2024-07-25 14:04:39.451618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f7b30 00:36:42.798 [2024-07-25 14:04:39.451639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-07-25 14:04:39.461481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.799 [2024-07-25 14:04:39.461586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.799 [2024-07-25 14:04:39.461615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.799 [2024-07-25 14:04:39.461630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.799 [2024-07-25 14:04:39.461643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0594000b90 00:36:42.799 [2024-07-25 14:04:39.461670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:42.799 qpair failed and we were unable to recover it. 
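Note: from the 14:04:39.461 attempt onward the failures move to a different transport qpair (tqpair=0x7f0594000b90, qpair id 4, then id 2): the host is cycling through several I/O qpairs on the same controller. The number of I/O qpairs a controller may create is capped when the transport is created; the spdkcli test later in this log does exactly that with max_io_qpairs_per_ctrlr=4. A hedged JSON-RPC equivalent (flag spellings assumed from the in-tree rpc.py):

    # cap each controller at 4 I/O qpairs on the TCP transport
    rpc.py nvmf_create_transport --trtype TCP --max-io-qpairs-per-ctrlr 4 --io-unit-size 8192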
00:36:42.799 [2024-07-25 14:04:39.471467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.799 [2024-07-25 14:04:39.471554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.799 [2024-07-25 14:04:39.471571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.799 [2024-07-25 14:04:39.471581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.799 [2024-07-25 14:04:39.471589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0594000b90 00:36:42.799 [2024-07-25 14:04:39.471608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-07-25 14:04:39.481538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.799 [2024-07-25 14:04:39.481620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.799 [2024-07-25 14:04:39.481642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.799 [2024-07-25 14:04:39.481653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.799 [2024-07-25 14:04:39.481661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90 00:36:42.799 [2024-07-25 14:04:39.481682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-07-25 14:04:39.491564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.799 [2024-07-25 14:04:39.491650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.799 [2024-07-25 14:04:39.491668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.799 [2024-07-25 14:04:39.491678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.799 [2024-07-25 14:04:39.491686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f059c000b90 00:36:42.799 [2024-07-25 14:04:39.491704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-07-25 14:04:39.491781] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:42.799 A controller has encountered a failure and is being reset. 00:36:42.799 Controller properly reset. 
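Note: once the CONNECT retries have failed on every qpair, submitting the Keep Alive itself fails, at which point SPDK declares the controller failed and resets it; the lines that follow show the controller re-initializing and the worker threads reattaching on lcores 0-3. The host-side equivalent of that recovery, sketched with nvme-cli (same NQN and address as the log):

    # drop the stale association, then re-run the full fabrics CONNECT
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1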
00:36:42.799 Initializing NVMe Controllers 00:36:42.799 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:42.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:42.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:42.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:42.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:42.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:42.799 Initialization complete. Launching workers. 00:36:42.799 Starting thread on core 1 00:36:42.799 Starting thread on core 2 00:36:42.799 Starting thread on core 3 00:36:42.799 Starting thread on core 0 00:36:42.799 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:42.799 00:36:42.799 real 0m11.409s 00:36:42.799 user 0m20.923s 00:36:42.799 sys 0m4.863s 00:36:42.799 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:42.799 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:42.799 ************************************ 00:36:42.799 END TEST nvmf_target_disconnect_tc2 00:36:42.799 ************************************ 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:43.057 rmmod nvme_tcp 00:36:43.057 rmmod nvme_fabrics 00:36:43.057 rmmod nvme_keyring 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 516671 ']' 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 516671 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 516671 ']' 00:36:43.057 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 516671 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 516671 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 516671' 00:36:43.058 killing process with pid 516671 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 516671 00:36:43.058 14:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 516671 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.316 14:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.228 14:04:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:45.228 00:36:45.228 real 0m20.538s 00:36:45.228 user 0m48.658s 00:36:45.228 sys 0m10.100s 00:36:45.228 14:04:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.228 14:04:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:45.228 ************************************ 00:36:45.228 END TEST nvmf_target_disconnect 00:36:45.228 ************************************ 00:36:45.487 14:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:45.487 00:36:45.487 real 7m45.479s 00:36:45.487 user 17m34.690s 00:36:45.487 sys 2m29.795s 00:36:45.487 14:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.487 14:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.487 ************************************ 00:36:45.487 END TEST nvmf_host 00:36:45.487 ************************************ 00:36:45.487 00:36:45.487 real 30m21.999s 00:36:45.487 user 75m13.136s 00:36:45.487 sys 9m54.813s 00:36:45.487 14:04:42 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.487 14:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:45.487 ************************************ 00:36:45.487 END TEST nvmf_tcp 00:36:45.487 ************************************ 00:36:45.487 14:04:42 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:36:45.487 14:04:42 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:45.487 14:04:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
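Note: the teardown traced above is nvmftestfini: it kills the target process (pid 516671), unloads the host-side NVMe/TCP modules, and flushes the test interface. Reduced to the commands that actually appear in the trace:

    # host-side cleanup: unload the NVMe/TCP stack, then flush the test NIC
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1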
00:36:45.487 14:04:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:45.487 14:04:42 -- common/autotest_common.sh@10 -- # set +x 00:36:45.487 ************************************ 00:36:45.487 START TEST spdkcli_nvmf_tcp 00:36:45.487 ************************************ 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:45.487 * Looking for test storage... 00:36:45.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:45.487 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=518161 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 518161 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 
0x3 -p 0 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 518161 ']' 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:45.747 14:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:45.747 [2024-07-25 14:04:42.425559] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:36:45.747 [2024-07-25 14:04:42.425611] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518161 ] 00:36:45.747 EAL: No free 2048 kB hugepages reported on node 1 00:36:45.747 [2024-07-25 14:04:42.460836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:45.747 [2024-07-25 14:04:42.495068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:45.747 [2024-07-25 14:04:42.535345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:45.747 [2024-07-25 14:04:42.535350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.690 14:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:46.691 14:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:46.691 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:46.691 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:46.691 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:46.691 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:46.691 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:46.691 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:46.691 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:46.691 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:46.691 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:46.691 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:46.691 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:46.691 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:46.691 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:46.691 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:46.691 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:46.691 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:46.692 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:46.692 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:46.692 ' 00:36:49.229 [2024-07-25 14:04:45.641945] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.165 [2024-07-25 14:04:46.817823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:52.703 [2024-07-25 14:04:48.980326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:54.081 [2024-07-25 14:04:50.838187] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:55.459 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:55.459 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:55.459 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:55.459 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:55.459 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
00:36:55.459 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:55.459 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:55.459 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:55.459 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:55.459 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:55.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:55.459 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:55.723 14:04:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:56.058 14:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:56.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:56.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:56.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:56.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:56.058 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:56.058 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:56.058 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:56.058 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:56.058 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:56.058 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:56.058 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:56.058 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:56.058 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:56.058 ' 00:37:01.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:01.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:01.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:01.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:01.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:01.330 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:01.330 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:01.330 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:01.330 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:01.330 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:01.330 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:01.330 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:01.330 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:01.330 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 518161 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 518161 ']' 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 518161 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 518161 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 518161' 00:37:01.330 killing process with pid 518161 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 518161 00:37:01.330 14:04:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 518161 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 518161 ']' 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 518161 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 518161 ']' 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 518161 00:37:01.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (518161) - No such process 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 518161 is not found' 00:37:01.330 Process with pid 518161 is not found 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:01.330 00:37:01.330 real 0m15.840s 00:37:01.330 user 0m32.677s 00:37:01.330 sys 
0m0.860s 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.330 14:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:01.330 ************************************ 00:37:01.330 END TEST spdkcli_nvmf_tcp 00:37:01.330 ************************************ 00:37:01.330 14:04:58 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:01.330 14:04:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:01.330 14:04:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.330 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:37:01.330 ************************************ 00:37:01.330 START TEST nvmf_identify_passthru 00:37:01.330 ************************************ 00:37:01.330 14:04:58 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:01.590 * Looking for test storage... 00:37:01.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:01.590 14:04:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.590 14:04:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.590 14:04:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.590 14:04:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:01.590 14:04:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.590 14:04:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.590 14:04:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.590 14:04:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:01.590 14:04:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.590 14:04:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:01.590 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:01.591 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:01.591 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.591 14:04:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:01.591 14:04:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.591 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:01.591 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:01.591 14:04:58 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:37:01.591 14:04:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.162 14:05:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:08.162 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:08.162 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:08.162 Found net devices under 0000:af:00.0: cvl_0_0 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:08.162 Found net devices under 0000:af:00.1: cvl_0_1 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
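Note on the nvmf_tcp_init sequence that follows: it builds the two-port loopback topology used by every TCP test in this run. One E810 port (cvl_0_1) stays in the root namespace as the initiator; the other (cvl_0_0) is moved into a private network namespace as the target side. A condensed sketch of that sequence, using the interface names and 10.0.0.0/24 addresses reported in this log (each command appears verbatim in the trace below):

    ip netns add cvl_0_0_ns_spdk               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator keeps cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                         # connectivity check before any NVMe traffic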
00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:08.162 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:08.163 14:05:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:08.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:37:08.422 00:37:08.422 --- 10.0.0.2 ping statistics --- 00:37:08.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.422 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:08.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:08.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:37:08.422 00:37:08.422 --- 10.0.0.1 ping statistics --- 00:37:08.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.422 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:08.422 14:05:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:08.422 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:08.422 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:08.422 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:08.423 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:37:08.423 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:37:08.423 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:37:08.423 14:05:05 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:d8:00.0 00:37:08.423 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:37:08.423 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:37:08.423 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:37:08.423 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:08.423 14:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:08.682 EAL: No free 2048 kB hugepages reported on node 1 00:37:13.956 
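Note on get_first_nvme_bdf above: it resolves the PCIe address of the first local NVMe device by asking gen_nvme.sh for its generated config and extracting the traddr fields with jq; the drive's serial number is then scraped from spdk_nvme_identify output with grep/awk. Condensed from the trace, with the address this run resolved ($rootdir is the SPDK checkout):

    # resolve the first NVMe bdf (here: 0000:d8:00.0)
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}
    # scrape the serial number from the local identify output
    nvme_serial_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')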
14:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:37:13.956 14:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:13.956 14:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:37:13.956 14:05:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:13.956 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=525570 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:18.152 14:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 525570 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 525570 ']' 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:18.152 14:05:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:18.152 [2024-07-25 14:05:14.824330] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:37:18.152 [2024-07-25 14:05:14.824389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.152 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.152 [2024-07-25 14:05:14.865779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:18.152 [2024-07-25 14:05:14.900238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:18.152 [2024-07-25 14:05:14.941202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
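Note on the target startup above: nvmf_tgt is launched inside the namespace with --wait-for-rpc, so framework initialization is deferred until the passthru option has been applied over JSON-RPC. A sketch of that ordering, assuming rpc_cmd in the trace is the usual harness wrapper around scripts/rpc.py:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must land before framework start
    scripts/rpc.py framework_start_init

The INFO request/response bodies printed below are the JSON-RPC traffic generated by these two calls.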
00:37:18.152 [2024-07-25 14:05:14.941242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.152 [2024-07-25 14:05:14.941252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.152 [2024-07-25 14:05:14.941260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.152 [2024-07-25 14:05:14.941267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:18.152 [2024-07-25 14:05:14.941309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.152 [2024-07-25 14:05:14.941329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:18.152 [2024-07-25 14:05:14.941420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:18.152 [2024-07-25 14:05:14.941422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:37:19.090 14:05:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:19.090 INFO: Log level set to 20 00:37:19.090 INFO: Requests: 00:37:19.090 { 00:37:19.090 "jsonrpc": "2.0", 00:37:19.090 "method": "nvmf_set_config", 00:37:19.090 "id": 1, 00:37:19.090 "params": { 00:37:19.090 "admin_cmd_passthru": { 00:37:19.090 "identify_ctrlr": true 00:37:19.090 } 00:37:19.090 } 00:37:19.090 } 00:37:19.090 00:37:19.090 INFO: response: 00:37:19.090 { 00:37:19.090 "jsonrpc": "2.0", 00:37:19.090 "id": 1, 00:37:19.090 "result": true 00:37:19.090 } 00:37:19.090 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.090 14:05:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.090 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:19.090 INFO: Setting log level to 20 00:37:19.090 INFO: Setting log level to 20 00:37:19.090 INFO: Log level set to 20 00:37:19.090 INFO: Log level set to 20 00:37:19.090 INFO: Requests: 00:37:19.090 { 00:37:19.090 "jsonrpc": "2.0", 00:37:19.090 "method": "framework_start_init", 00:37:19.090 "id": 1 00:37:19.090 } 00:37:19.090 00:37:19.090 INFO: Requests: 00:37:19.090 { 00:37:19.090 "jsonrpc": "2.0", 00:37:19.090 "method": "framework_start_init", 00:37:19.090 "id": 1 00:37:19.090 } 00:37:19.090 00:37:19.090 [2024-07-25 14:05:15.725611] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:19.090 INFO: response: 00:37:19.090 { 00:37:19.090 "jsonrpc": "2.0", 00:37:19.090 "id": 1, 00:37:19.090 "result": true 00:37:19.090 } 00:37:19.090 00:37:19.090 INFO: response: 00:37:19.090 { 00:37:19.090 "jsonrpc": "2.0", 00:37:19.090 "id": 1, 00:37:19.091 "result": true 00:37:19.091 } 00:37:19.091 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.091 14:05:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
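Note on the subsystem assembly traced around this point: once the framework is up, the passthru target is built from a TCP transport, a local NVMe controller attached by PCIe address, and a single-namespace subsystem (-m 1) exposing that controller's namespace on 10.0.0.2:4420. Condensed (same wrapper assumption as above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify against the fabric endpoint and passes only if the serial and model numbers reported over TCP match those read locally from 0000:d8:00.0, which is the comparison visible in the identify_passthru.sh@63 and @68 checks below.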
00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:19.091 INFO: Setting log level to 40 00:37:19.091 INFO: Setting log level to 40 00:37:19.091 INFO: Setting log level to 40 00:37:19.091 [2024-07-25 14:05:15.739091] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.091 14:05:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:19.091 14:05:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.091 14:05:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.442 Nvme0n1 00:37:22.442 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.442 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:22.442 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.442 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.442 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.442 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.443 [2024-07-25 14:05:18.661712] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.443 [ 00:37:22.443 { 00:37:22.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:22.443 "subtype": "Discovery", 00:37:22.443 "listen_addresses": [], 00:37:22.443 "allow_any_host": true, 00:37:22.443 "hosts": [] 00:37:22.443 }, 00:37:22.443 { 00:37:22.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.443 "subtype": "NVMe", 00:37:22.443 "listen_addresses": [ 00:37:22.443 { 00:37:22.443 "trtype": "TCP", 00:37:22.443 "adrfam": "IPv4", 00:37:22.443 "traddr": "10.0.0.2", 00:37:22.443 
"trsvcid": "4420" 00:37:22.443 } 00:37:22.443 ], 00:37:22.443 "allow_any_host": true, 00:37:22.443 "hosts": [], 00:37:22.443 "serial_number": "SPDK00000000000001", 00:37:22.443 "model_number": "SPDK bdev Controller", 00:37:22.443 "max_namespaces": 1, 00:37:22.443 "min_cntlid": 1, 00:37:22.443 "max_cntlid": 65519, 00:37:22.443 "namespaces": [ 00:37:22.443 { 00:37:22.443 "nsid": 1, 00:37:22.443 "bdev_name": "Nvme0n1", 00:37:22.443 "name": "Nvme0n1", 00:37:22.443 "nguid": "B3E07778F1184115927197303DE6DBD1", 00:37:22.443 "uuid": "b3e07778-f118-4115-9271-97303de6dbd1" 00:37:22.443 } 00:37:22.443 ] 00:37:22.443 } 00:37:22.443 ] 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:22.443 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:22.443 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:22.443 14:05:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:22.443 rmmod nvme_tcp 00:37:22.443 rmmod nvme_fabrics 00:37:22.443 rmmod nvme_keyring 00:37:22.443 14:05:18 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 525570 ']' 00:37:22.443 14:05:18 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 525570 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 525570 ']' 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 525570 00:37:22.443 14:05:18 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 525570 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 525570' 00:37:22.443 killing process with pid 525570 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 525570 00:37:22.443 14:05:19 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 525570 00:37:24.351 14:05:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:24.351 14:05:21 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:24.351 14:05:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:24.351 14:05:21 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:24.351 14:05:21 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:24.351 14:05:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.351 14:05:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:24.351 14:05:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.258 14:05:23 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:26.518 00:37:26.518 real 0m24.998s 00:37:26.518 user 0m32.961s 00:37:26.518 sys 0m6.606s 00:37:26.518 14:05:23 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:26.518 14:05:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:26.518 ************************************ 00:37:26.518 END TEST nvmf_identify_passthru 00:37:26.518 ************************************ 00:37:26.518 14:05:23 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:26.518 14:05:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:26.518 14:05:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:26.518 14:05:23 -- common/autotest_common.sh@10 -- # set +x 00:37:26.518 ************************************ 00:37:26.518 START TEST nvmf_dif 00:37:26.518 ************************************ 00:37:26.518 14:05:23 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:26.518 * Looking for test storage... 
00:37:26.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:26.518 14:05:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.518 14:05:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.518 14:05:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.518 14:05:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.518 14:05:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.518 14:05:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.518 14:05:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.518 14:05:23 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:37:26.518 14:05:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:26.518 14:05:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:26.518 14:05:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:26.518 14:05:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:26.518 14:05:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:26.518 14:05:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.518 14:05:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:26.518 14:05:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:26.518 14:05:23 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:37:26.518 14:05:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:33.090 14:05:29 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.090 14:05:29 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:33.090 14:05:29 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:33.090 14:05:29 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:33.090 14:05:29 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:33.090 14:05:29 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:33.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:33.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:33.091 Found net devices under 0000:af:00.0: cvl_0_0 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:33.091 Found net devices under 0000:af:00.1: cvl_0_1 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.091 14:05:29 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:33.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:37:33.091 00:37:33.091 --- 10.0.0.2 ping statistics --- 00:37:33.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.091 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:33.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:37:33.091 00:37:33.091 --- 10.0.0.1 ping statistics --- 00:37:33.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.091 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:33.091 14:05:29 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:35.627 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:35.627 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:35.887 14:05:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:35.887 14:05:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:35.887 14:05:32 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:35.887 14:05:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=531347 00:37:35.887 14:05:32 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 531347 00:37:35.888 14:05:32 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:35.888 14:05:32 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 531347 ']' 00:37:35.888 14:05:32 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.888 14:05:32 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:35.888 14:05:32 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.888 14:05:32 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:35.888 14:05:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:35.888 [2024-07-25 14:05:32.641773] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:37:35.888 [2024-07-25 14:05:32.641825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.888 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.888 [2024-07-25 14:05:32.682302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:35.888 [2024-07-25 14:05:32.717061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.888 [2024-07-25 14:05:32.756160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.888 [2024-07-25 14:05:32.756200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.888 [2024-07-25 14:05:32.756210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.888 [2024-07-25 14:05:32.756219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.888 [2024-07-25 14:05:32.756225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
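Note on the nvmf_dif stage starting here: its target differs from the passthru one in two ways, both visible in the trace that follows. The TCP transport is created with --dif-insert-or-strip so the target inserts and strips protection information, and the backing device is a null bdev with per-block metadata and DIF type 1 (NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16, NULL_DIF=1 above). The equivalent RPC calls, condensed (size 64 is the device size in MB, per bdev_null_create's size argument):

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420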
00:37:35.888 [2024-07-25 14:05:32.756246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:37:36.825 14:05:33 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 14:05:33 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.825 14:05:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:36.825 14:05:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 [2024-07-25 14:05:33.478152] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.825 14:05:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 ************************************ 00:37:36.825 START TEST fio_dif_1_default 00:37:36.825 ************************************ 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 bdev_null0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.825 [2024-07-25 14:05:33.550455] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:36.825 { 00:37:36.825 "params": { 00:37:36.825 "name": "Nvme$subsystem", 00:37:36.825 "trtype": "$TEST_TRANSPORT", 00:37:36.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.825 "adrfam": "ipv4", 00:37:36.825 "trsvcid": "$NVMF_PORT", 00:37:36.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.825 "hdgst": ${hdgst:-false}, 00:37:36.825 "ddgst": ${ddgst:-false} 00:37:36.825 }, 00:37:36.825 "method": "bdev_nvme_attach_controller" 00:37:36.825 } 00:37:36.825 EOF 00:37:36.825 )") 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.825 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:36.826 "params": { 00:37:36.826 "name": "Nvme0", 00:37:36.826 "trtype": "tcp", 00:37:36.826 "traddr": "10.0.0.2", 00:37:36.826 "adrfam": "ipv4", 00:37:36.826 "trsvcid": "4420", 00:37:36.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.826 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.826 "hdgst": false, 00:37:36.826 "ddgst": false 00:37:36.826 }, 00:37:36.826 "method": "bdev_nvme_attach_controller" 00:37:36.826 }' 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:36.826 14:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.084 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:37.084 fio-3.35 00:37:37.084 Starting 1 thread 00:37:37.084 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.297 00:37:49.297 filename0: (groupid=0, jobs=1): err= 0: pid=531776: Thu Jul 25 14:05:44 2024 00:37:49.297 read: IOPS=96, BW=384KiB/s (394kB/s)(3856KiB/10034msec) 00:37:49.297 slat (nsec): min=3886, max=29780, avg=5848.39, stdev=1324.76 00:37:49.297 clat (usec): min=40882, max=46344, avg=41615.65, stdev=583.04 00:37:49.297 lat (usec): min=40887, max=46356, avg=41621.50, stdev=583.10 00:37:49.297 clat percentiles (usec): 00:37:49.297 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:49.297 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:37:49.297 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:49.297 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:37:49.297 | 99.99th=[46400] 00:37:49.297 bw ( KiB/s): min= 352, max= 416, per=99.92%, avg=384.00, stdev=14.68, samples=20 00:37:49.297 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:37:49.297 
lat (msec) : 50=100.00% 00:37:49.297 cpu : usr=84.79%, sys=14.97%, ctx=14, majf=0, minf=232 00:37:49.297 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:49.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.297 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.297 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:49.297 00:37:49.297 Run status group 0 (all jobs): 00:37:49.297 READ: bw=384KiB/s (394kB/s), 384KiB/s-384KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10034-10034msec 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 00:37:49.297 real 0m11.145s 00:37:49.297 user 0m16.712s 00:37:49.297 sys 0m1.810s 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 ************************************ 00:37:49.297 END TEST fio_dif_1_default 00:37:49.297 ************************************ 00:37:49.297 14:05:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:49.297 14:05:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:49.297 14:05:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 ************************************ 00:37:49.297 START TEST fio_dif_1_multi_subsystems 00:37:49.297 ************************************ 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:49.297 14:05:44 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 bdev_null0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 [2024-07-25 14:05:44.774898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 bdev_null1 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.297 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:49.298 { 00:37:49.298 "params": { 00:37:49.298 "name": "Nvme$subsystem", 00:37:49.298 "trtype": "$TEST_TRANSPORT", 00:37:49.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:49.298 "adrfam": "ipv4", 00:37:49.298 "trsvcid": "$NVMF_PORT", 00:37:49.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:49.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:49.298 "hdgst": ${hdgst:-false}, 00:37:49.298 "ddgst": ${ddgst:-false} 00:37:49.298 }, 00:37:49.298 "method": "bdev_nvme_attach_controller" 00:37:49.298 } 00:37:49.298 EOF 00:37:49.298 )") 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.298 14:05:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:49.298 { 00:37:49.298 "params": { 00:37:49.298 "name": "Nvme$subsystem", 00:37:49.298 "trtype": "$TEST_TRANSPORT", 00:37:49.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:49.298 "adrfam": "ipv4", 00:37:49.298 "trsvcid": "$NVMF_PORT", 00:37:49.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:49.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:49.298 "hdgst": ${hdgst:-false}, 00:37:49.298 "ddgst": ${ddgst:-false} 00:37:49.298 }, 00:37:49.298 "method": "bdev_nvme_attach_controller" 00:37:49.298 } 00:37:49.298 EOF 00:37:49.298 )") 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
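The xtrace above shows how the harness decides what to preload before launching fio: it runs ldd against the spdk_bdev plugin, greps for each sanitizer runtime (libasan, then libclang_rt.asan), and takes the third column of any matching line as the library path. A minimal standalone sketch of that logic, assuming the workspace paths seen in this log (the real implementation lives in the fio_plugin/fio_bdev helpers of autotest_common.sh):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # Third column of a matching ldd line is the resolved library path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # Preload the sanitizer runtime (empty in this run) ahead of the bdev
    # ioengine, then hand fio the generated JSON config and job file.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run both greps come back empty, which is why the trace shows LD_PRELOAD set to just the plugin path with a leading space.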
00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:49.298 "params": { 00:37:49.298 "name": "Nvme0", 00:37:49.298 "trtype": "tcp", 00:37:49.298 "traddr": "10.0.0.2", 00:37:49.298 "adrfam": "ipv4", 00:37:49.298 "trsvcid": "4420", 00:37:49.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:49.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:49.298 "hdgst": false, 00:37:49.298 "ddgst": false 00:37:49.298 }, 00:37:49.298 "method": "bdev_nvme_attach_controller" 00:37:49.298 },{ 00:37:49.298 "params": { 00:37:49.298 "name": "Nvme1", 00:37:49.298 "trtype": "tcp", 00:37:49.298 "traddr": "10.0.0.2", 00:37:49.298 "adrfam": "ipv4", 00:37:49.298 "trsvcid": "4420", 00:37:49.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:49.298 "hdgst": false, 00:37:49.298 "ddgst": false 00:37:49.298 }, 00:37:49.298 "method": "bdev_nvme_attach_controller" 00:37:49.298 }' 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:49.298 14:05:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.298 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:49.298 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:49.298 fio-3.35 00:37:49.298 Starting 2 threads 00:37:49.298 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.305 00:37:59.305 filename0: (groupid=0, jobs=1): err= 0: pid=533766: Thu Jul 25 14:05:55 2024 00:37:59.305 read: IOPS=187, BW=749KiB/s (767kB/s)(7520KiB/10037msec) 00:37:59.305 slat (nsec): min=5652, max=26729, avg=6756.05, stdev=1971.29 00:37:59.305 clat (usec): min=764, max=42751, avg=21335.59, stdev=20328.67 00:37:59.305 lat (usec): min=770, max=42777, avg=21342.35, stdev=20328.08 00:37:59.305 clat percentiles (usec): 00:37:59.305 | 1.00th=[ 873], 5.00th=[ 881], 10.00th=[ 889], 20.00th=[ 898], 00:37:59.305 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[41157], 60.00th=[41157], 00:37:59.305 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:59.305 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:59.305 | 99.99th=[42730] 00:37:59.305 
bw ( KiB/s): min= 672, max= 768, per=66.27%, avg=750.40, stdev=30.22, samples=20 00:37:59.305 iops : min= 168, max= 192, avg=187.60, stdev= 7.56, samples=20 00:37:59.305 lat (usec) : 1000=43.88% 00:37:59.305 lat (msec) : 2=5.90%, 50=50.21% 00:37:59.305 cpu : usr=93.31%, sys=6.46%, ctx=13, majf=0, minf=48 00:37:59.305 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.305 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.305 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:59.305 filename1: (groupid=0, jobs=1): err= 0: pid=533767: Thu Jul 25 14:05:55 2024 00:37:59.305 read: IOPS=95, BW=383KiB/s (393kB/s)(3840KiB/10016msec) 00:37:59.305 slat (nsec): min=5642, max=34631, avg=7457.14, stdev=2768.89 00:37:59.305 clat (usec): min=40828, max=43004, avg=41710.33, stdev=472.84 00:37:59.305 lat (usec): min=40833, max=43015, avg=41717.79, stdev=473.06 00:37:59.305 clat percentiles (usec): 00:37:59.305 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:59.305 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:37:59.305 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:59.305 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:37:59.305 | 99.99th=[43254] 00:37:59.305 bw ( KiB/s): min= 352, max= 416, per=33.75%, avg=382.40, stdev=12.61, samples=20 00:37:59.305 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:37:59.305 lat (msec) : 50=100.00% 00:37:59.305 cpu : usr=93.61%, sys=6.16%, ctx=8, majf=0, minf=166 00:37:59.305 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.305 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.305 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:59.305 00:37:59.305 Run status group 0 (all jobs): 00:37:59.305 READ: bw=1132KiB/s (1159kB/s), 383KiB/s-749KiB/s (393kB/s-767kB/s), io=11.1MiB (11.6MB), run=10016-10037msec 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.305 
14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.305 00:37:59.305 real 0m11.424s 00:37:59.305 user 0m27.921s 00:37:59.305 sys 0m1.629s 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:59.305 14:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:59.305 ************************************ 00:37:59.305 END TEST fio_dif_1_multi_subsystems 00:37:59.305 ************************************ 00:37:59.565 14:05:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:59.565 14:05:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:59.565 14:05:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:59.565 14:05:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:59.565 ************************************ 00:37:59.565 START TEST fio_dif_rand_params 00:37:59.565 ************************************ 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.565 bdev_null0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.565 [2024-07-25 14:05:56.281307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:59.565 { 00:37:59.565 "params": { 00:37:59.565 "name": "Nvme$subsystem", 00:37:59.565 "trtype": "$TEST_TRANSPORT", 00:37:59.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.565 "adrfam": "ipv4", 00:37:59.565 "trsvcid": "$NVMF_PORT", 00:37:59.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.565 "hdgst": ${hdgst:-false}, 00:37:59.565 "ddgst": ${ddgst:-false} 00:37:59.565 }, 00:37:59.565 "method": "bdev_nvme_attach_controller" 00:37:59.565 } 00:37:59.565 EOF 00:37:59.565 )") 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:59.565 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
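For reference, the create_transport/create_subsystems steps traced above map onto a short sequence of SPDK RPCs. A sketch written as direct scripts/rpc.py calls, using the exact values from this run (rpc_cmd in the log is a thin wrapper around the same RPCs; the default RPC socket is an assumption):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # TCP transport with DIF insert/strip offload, as set up by create_transport
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MiB null bdev: 512-byte blocks, 16-byte metadata, end-to-end DIF type 3
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The earlier fio_dif_1_default and fio_dif_1_multi_subsystems tests follow the same pattern with --dif-type 1, and the multi-subsystem case repeats the subsystem/namespace/listener steps for cnode1 with bdev_null1.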
00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:59.566 "params": { 00:37:59.566 "name": "Nvme0", 00:37:59.566 "trtype": "tcp", 00:37:59.566 "traddr": "10.0.0.2", 00:37:59.566 "adrfam": "ipv4", 00:37:59.566 "trsvcid": "4420", 00:37:59.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:59.566 "hdgst": false, 00:37:59.566 "ddgst": false 00:37:59.566 }, 00:37:59.566 "method": "bdev_nvme_attach_controller" 00:37:59.566 }' 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:59.566 14:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.826 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:59.826 ... 
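Here fio reads its job description from /dev/fd/61, generated by gen_fio_conf from the parameters set at the top of fio_dif_rand_params (bs=128k, numjobs=3, iodepth=3, runtime=5). A plausible reconstruction of that job file follows; the exact directive list is an assumption (dif.sh may emit slightly different options), and Nvme0n1 is the bdev name SPDK derives from attaching controller Nvme0 in the JSON config:

    [global]
    thread=1          ; required by the spdk_bdev ioengine
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3
    time_based=1
    runtime=5
    ; ioengine=spdk_bdev is supplied on the fio command line, as seen above

    [filename0]
    filename=Nvme0n1

This matches the banner fio prints next: one job group, randread at 128 KiB, iodepth 3, three threads.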
00:37:59.826 fio-3.35 00:37:59.826 Starting 3 threads 00:38:00.085 EAL: No free 2048 kB hugepages reported on node 1 00:38:06.714 00:38:06.714 filename0: (groupid=0, jobs=1): err= 0: pid=535760: Thu Jul 25 14:06:02 2024 00:38:06.714 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(160MiB/5046msec) 00:38:06.714 slat (nsec): min=5880, max=71277, avg=9314.19, stdev=3171.48 00:38:06.714 clat (usec): min=3718, max=53034, avg=11806.49, stdev=12929.04 00:38:06.714 lat (usec): min=3725, max=53044, avg=11815.80, stdev=12929.44 00:38:06.714 clat percentiles (usec): 00:38:06.714 | 1.00th=[ 4146], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 6063], 00:38:06.714 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 8225], 00:38:06.714 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[46400], 95.00th=[49546], 00:38:06.714 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52167], 99.95th=[53216], 00:38:06.714 | 99.99th=[53216] 00:38:06.714 bw ( KiB/s): min=16128, max=44800, per=33.03%, avg=32640.00, stdev=7602.09, samples=10 00:38:06.714 iops : min= 126, max= 350, avg=255.00, stdev=59.39, samples=10 00:38:06.714 lat (msec) : 4=0.31%, 10=82.54%, 20=6.66%, 50=6.73%, 100=3.76% 00:38:06.714 cpu : usr=91.54%, sys=8.15%, ctx=7, majf=0, minf=31 00:38:06.714 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.714 issued rwts: total=1277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.714 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:06.714 filename0: (groupid=0, jobs=1): err= 0: pid=535761: Thu Jul 25 14:06:02 2024 00:38:06.714 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(169MiB/5046msec) 00:38:06.714 slat (nsec): min=5863, max=30599, avg=9371.79, stdev=2663.81 00:38:06.714 clat (usec): min=3677, max=90764, avg=11175.61, stdev=12469.01 00:38:06.714 lat (usec): min=3685, max=90776, avg=11184.98, stdev=12469.18 00:38:06.714 clat percentiles (usec): 00:38:06.714 | 1.00th=[ 4113], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 6194], 00:38:06.714 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 7963], 00:38:06.714 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[11207], 95.00th=[49546], 00:38:06.714 | 99.00th=[51643], 99.50th=[53740], 99.90th=[89654], 99.95th=[90702], 00:38:06.714 | 99.99th=[90702] 00:38:06.714 bw ( KiB/s): min=17920, max=58880, per=35.49%, avg=35072.00, stdev=11849.51, samples=9 00:38:06.714 iops : min= 140, max= 460, avg=274.00, stdev=92.57, samples=9 00:38:06.714 lat (msec) : 4=0.15%, 10=81.99%, 20=9.27%, 50=4.60%, 100=4.00% 00:38:06.714 cpu : usr=91.40%, sys=8.29%, ctx=9, majf=0, minf=110 00:38:06.714 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.714 issued rwts: total=1349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.714 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:06.714 filename0: (groupid=0, jobs=1): err= 0: pid=535762: Thu Jul 25 14:06:02 2024 00:38:06.714 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(159MiB/5003msec) 00:38:06.714 slat (nsec): min=5857, max=25888, avg=8908.09, stdev=2728.33 00:38:06.714 clat (usec): min=3923, max=92187, avg=11803.14, stdev=13205.74 00:38:06.714 lat (usec): min=3929, max=92199, avg=11812.04, stdev=13205.95 00:38:06.714 clat percentiles 
(usec): 00:38:06.714 | 1.00th=[ 4228], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6063], 00:38:06.714 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7767], 60.00th=[ 8291], 00:38:06.714 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[12125], 95.00th=[49546], 00:38:06.714 | 99.00th=[51643], 99.50th=[52167], 99.90th=[91751], 99.95th=[91751], 00:38:06.714 | 99.99th=[91751] 00:38:06.714 bw ( KiB/s): min=20480, max=43776, per=32.72%, avg=32341.33, stdev=6428.10, samples=9 00:38:06.714 iops : min= 160, max= 342, avg=252.67, stdev=50.22, samples=9 00:38:06.714 lat (msec) : 4=0.16%, 10=82.20%, 20=7.80%, 50=6.30%, 100=3.54% 00:38:06.714 cpu : usr=91.46%, sys=8.22%, ctx=11, majf=0, minf=132 00:38:06.714 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.714 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.714 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:06.714 00:38:06.714 Run status group 0 (all jobs): 00:38:06.714 READ: bw=96.5MiB/s (101MB/s), 31.6MiB/s-33.4MiB/s (33.2MB/s-35.0MB/s), io=487MiB (511MB), run=5003-5046msec 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.714 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.714 bdev_null0 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 [2024-07-25 14:06:02.497183] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 bdev_null1 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 bdev_null2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:06.715 { 00:38:06.715 "params": { 00:38:06.715 "name": "Nvme$subsystem", 00:38:06.715 "trtype": "$TEST_TRANSPORT", 00:38:06.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.715 "adrfam": "ipv4", 00:38:06.715 "trsvcid": "$NVMF_PORT", 00:38:06.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.715 "hdgst": ${hdgst:-false}, 00:38:06.715 "ddgst": ${ddgst:-false} 00:38:06.715 }, 00:38:06.715 "method": "bdev_nvme_attach_controller" 00:38:06.715 } 00:38:06.715 EOF 00:38:06.715 )") 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:06.715 { 00:38:06.715 "params": { 00:38:06.715 "name": "Nvme$subsystem", 00:38:06.715 "trtype": "$TEST_TRANSPORT", 00:38:06.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.715 "adrfam": "ipv4", 00:38:06.715 "trsvcid": "$NVMF_PORT", 00:38:06.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.715 "hdgst": ${hdgst:-false}, 00:38:06.715 "ddgst": ${ddgst:-false} 00:38:06.715 }, 00:38:06.715 "method": "bdev_nvme_attach_controller" 00:38:06.715 } 00:38:06.715 EOF 00:38:06.715 )") 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:06.715 14:06:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:06.715 { 00:38:06.715 "params": { 00:38:06.715 "name": "Nvme$subsystem", 00:38:06.715 "trtype": "$TEST_TRANSPORT", 00:38:06.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.715 "adrfam": "ipv4", 00:38:06.715 "trsvcid": "$NVMF_PORT", 00:38:06.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.715 "hdgst": ${hdgst:-false}, 00:38:06.715 "ddgst": ${ddgst:-false} 00:38:06.715 }, 00:38:06.715 "method": "bdev_nvme_attach_controller" 00:38:06.715 } 00:38:06.715 EOF 00:38:06.715 )") 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:06.715 "params": { 00:38:06.715 "name": "Nvme0", 00:38:06.715 "trtype": "tcp", 00:38:06.715 "traddr": "10.0.0.2", 00:38:06.715 "adrfam": "ipv4", 00:38:06.715 "trsvcid": "4420", 00:38:06.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:06.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:06.715 "hdgst": false, 00:38:06.715 "ddgst": false 00:38:06.715 }, 00:38:06.715 "method": "bdev_nvme_attach_controller" 00:38:06.715 },{ 00:38:06.715 "params": { 00:38:06.715 "name": "Nvme1", 00:38:06.715 "trtype": "tcp", 00:38:06.715 "traddr": "10.0.0.2", 00:38:06.715 "adrfam": "ipv4", 00:38:06.715 "trsvcid": "4420", 00:38:06.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.715 "hdgst": false, 00:38:06.715 "ddgst": false 00:38:06.715 }, 00:38:06.715 "method": "bdev_nvme_attach_controller" 00:38:06.715 },{ 00:38:06.715 "params": { 00:38:06.715 "name": "Nvme2", 00:38:06.715 "trtype": "tcp", 00:38:06.715 "traddr": "10.0.0.2", 00:38:06.715 "adrfam": "ipv4", 00:38:06.715 "trsvcid": "4420", 00:38:06.715 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:06.715 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:06.715 "hdgst": false, 00:38:06.715 "ddgst": false 00:38:06.715 }, 00:38:06.715 "method": "bdev_nvme_attach_controller" 00:38:06.715 }' 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:06.715 
14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:06.715 14:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.715 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:06.715 ... 00:38:06.715 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:06.716 ... 00:38:06.716 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:06.716 ... 00:38:06.716 fio-3.35 00:38:06.716 Starting 24 threads 00:38:06.716 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.930 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537087: Thu Jul 25 14:06:13 2024 00:38:18.930 read: IOPS=634, BW=2537KiB/s (2598kB/s)(24.8MiB/10022msec) 00:38:18.930 slat (nsec): min=6235, max=82598, avg=19953.06, stdev=12780.66 00:38:18.930 clat (usec): min=4238, max=45468, avg=25072.75, stdev=3245.94 00:38:18.930 lat (usec): min=4246, max=45475, avg=25092.70, stdev=3247.82 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[12518], 5.00th=[19268], 10.00th=[23987], 20.00th=[24773], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.930 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26084], 95.00th=[26870], 00:38:18.930 | 99.00th=[38011], 99.50th=[40109], 99.90th=[43254], 99.95th=[44303], 00:38:18.930 | 99.99th=[45351] 00:38:18.930 bw ( KiB/s): min= 2384, max= 2960, per=4.27%, avg=2535.85, stdev=122.52, samples=20 00:38:18.930 iops : min= 596, max= 740, avg=633.95, stdev=30.63, samples=20 00:38:18.930 lat (msec) : 10=0.77%, 20=4.47%, 50=94.76% 00:38:18.930 cpu : usr=96.94%, sys=2.63%, ctx=20, majf=0, minf=44 00:38:18.930 IO depths : 1=4.8%, 2=9.7%, 4=21.6%, 8=55.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:38:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 issued rwts: total=6357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537088: Thu Jul 25 14:06:13 2024 00:38:18.930 read: IOPS=628, BW=2512KiB/s (2573kB/s)(24.6MiB/10022msec) 00:38:18.930 slat (nsec): min=6270, max=74588, avg=17198.97, stdev=10927.09 00:38:18.930 clat (usec): min=7612, max=45102, avg=25349.05, stdev=2892.10 00:38:18.930 lat (usec): min=7622, max=45113, avg=25366.25, stdev=2892.58 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[12911], 5.00th=[22938], 10.00th=[24249], 20.00th=[24773], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.930 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26346], 95.00th=[27132], 00:38:18.930 | 99.00th=[36963], 99.50th=[39060], 99.90th=[43779], 99.95th=[44827], 00:38:18.930 | 99.99th=[45351] 00:38:18.930 bw ( KiB/s): min= 2408, max= 2704, per=4.23%, avg=2511.05, stdev=72.35, samples=20 00:38:18.930 iops : min= 602, max= 676, avg=627.75, stdev=18.08, samples=20 00:38:18.930 
lat (msec) : 10=0.32%, 20=2.99%, 50=96.70% 00:38:18.930 cpu : usr=97.00%, sys=2.64%, ctx=17, majf=0, minf=36 00:38:18.930 IO depths : 1=3.4%, 2=6.9%, 4=18.0%, 8=62.1%, 16=9.7%, 32=0.0%, >=64=0.0% 00:38:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 complete : 0=0.0%, 4=92.7%, 8=2.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 issued rwts: total=6295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537089: Thu Jul 25 14:06:13 2024 00:38:18.930 read: IOPS=623, BW=2492KiB/s (2552kB/s)(24.4MiB/10016msec) 00:38:18.930 slat (nsec): min=6103, max=83563, avg=27933.69, stdev=13094.68 00:38:18.930 clat (usec): min=18970, max=46756, avg=25447.73, stdev=1159.51 00:38:18.930 lat (usec): min=18986, max=46773, avg=25475.66, stdev=1157.75 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.930 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26084], 95.00th=[26346], 00:38:18.930 | 99.00th=[28443], 99.50th=[31327], 99.90th=[39584], 99.95th=[46400], 00:38:18.930 | 99.99th=[46924] 00:38:18.930 bw ( KiB/s): min= 2432, max= 2560, per=4.19%, avg=2489.60, stdev=58.82, samples=20 00:38:18.930 iops : min= 608, max= 640, avg=622.40, stdev=14.71, samples=20 00:38:18.930 lat (msec) : 20=0.08%, 50=99.92% 00:38:18.930 cpu : usr=97.58%, sys=2.05%, ctx=71, majf=0, minf=32 00:38:18.930 IO depths : 1=5.4%, 2=10.7%, 4=22.1%, 8=54.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:38:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537090: Thu Jul 25 14:06:13 2024 00:38:18.930 read: IOPS=617, BW=2468KiB/s (2527kB/s)(24.1MiB/10009msec) 00:38:18.930 slat (nsec): min=5557, max=80006, avg=23724.81, stdev=12294.78 00:38:18.930 clat (usec): min=10347, max=47024, avg=25750.17, stdev=2997.95 00:38:18.930 lat (usec): min=10360, max=47064, avg=25773.89, stdev=2997.67 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[16319], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.930 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26608], 95.00th=[30016], 00:38:18.930 | 99.00th=[39584], 99.50th=[41681], 99.90th=[45351], 99.95th=[46924], 00:38:18.930 | 99.99th=[46924] 00:38:18.930 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2465.68, stdev=77.86, samples=19 00:38:18.930 iops : min= 576, max= 640, avg=616.42, stdev=19.47, samples=19 00:38:18.930 lat (msec) : 20=2.30%, 50=97.70% 00:38:18.930 cpu : usr=97.12%, sys=2.50%, ctx=21, majf=0, minf=35 00:38:18.930 IO depths : 1=3.1%, 2=6.2%, 4=17.3%, 8=63.3%, 16=10.1%, 32=0.0%, >=64=0.0% 00:38:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 complete : 0=0.0%, 4=92.5%, 8=2.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537091: Thu Jul 25 14:06:13 2024 
00:38:18.930 read: IOPS=623, BW=2492KiB/s (2552kB/s)(24.4MiB/10016msec) 00:38:18.930 slat (nsec): min=4711, max=85603, avg=29383.19, stdev=12376.28 00:38:18.930 clat (usec): min=15717, max=40384, avg=25415.01, stdev=1046.12 00:38:18.930 lat (usec): min=15724, max=40397, avg=25444.39, stdev=1044.85 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.930 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26084], 95.00th=[26346], 00:38:18.930 | 99.00th=[27132], 99.50th=[28967], 99.90th=[40109], 99.95th=[40109], 00:38:18.930 | 99.99th=[40633] 00:38:18.930 bw ( KiB/s): min= 2432, max= 2560, per=4.19%, avg=2489.60, stdev=65.33, samples=20 00:38:18.930 iops : min= 608, max= 640, avg=622.40, stdev=16.33, samples=20 00:38:18.930 lat (msec) : 20=0.06%, 50=99.94% 00:38:18.930 cpu : usr=97.21%, sys=2.41%, ctx=28, majf=0, minf=36 00:38:18.930 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537092: Thu Jul 25 14:06:13 2024 00:38:18.930 read: IOPS=618, BW=2474KiB/s (2533kB/s)(24.2MiB/10002msec) 00:38:18.930 slat (nsec): min=6210, max=71682, avg=28080.42, stdev=11795.26 00:38:18.930 clat (usec): min=11859, max=44098, avg=25613.35, stdev=1832.82 00:38:18.930 lat (usec): min=11867, max=44115, avg=25641.43, stdev=1831.44 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.930 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26084], 95.00th=[26608], 00:38:18.930 | 99.00th=[33817], 99.50th=[38011], 99.90th=[44303], 99.95th=[44303], 00:38:18.930 | 99.99th=[44303] 00:38:18.930 bw ( KiB/s): min= 2212, max= 2560, per=4.16%, avg=2470.11, stdev=91.94, samples=19 00:38:18.930 iops : min= 553, max= 640, avg=617.53, stdev=22.98, samples=19 00:38:18.930 lat (msec) : 20=0.19%, 50=99.81% 00:38:18.930 cpu : usr=97.11%, sys=2.52%, ctx=21, majf=0, minf=29 00:38:18.930 IO depths : 1=5.8%, 2=11.6%, 4=24.1%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.930 issued rwts: total=6186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.930 filename0: (groupid=0, jobs=1): err= 0: pid=537093: Thu Jul 25 14:06:13 2024 00:38:18.930 read: IOPS=622, BW=2492KiB/s (2552kB/s)(24.4MiB/10023msec) 00:38:18.930 slat (usec): min=6, max=103, avg=26.39, stdev=13.84 00:38:18.930 clat (usec): min=12892, max=43526, avg=25470.25, stdev=2323.62 00:38:18.930 lat (usec): min=12917, max=43549, avg=25496.64, stdev=2323.84 00:38:18.930 clat percentiles (usec): 00:38:18.930 | 1.00th=[16057], 5.00th=[23725], 10.00th=[24511], 20.00th=[25035], 00:38:18.930 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.930 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26346], 95.00th=[26870], 00:38:18.930 | 
99.00th=[35390], 99.50th=[39584], 99.90th=[42730], 99.95th=[43254], 00:38:18.930 | 99.99th=[43779] 00:38:18.930 bw ( KiB/s): min= 2432, max= 2560, per=4.19%, avg=2490.65, stdev=58.35, samples=20 00:38:18.930 iops : min= 608, max= 640, avg=622.65, stdev=14.58, samples=20 00:38:18.930 lat (msec) : 20=2.16%, 50=97.84% 00:38:18.930 cpu : usr=97.23%, sys=2.38%, ctx=19, majf=0, minf=29 00:38:18.931 IO depths : 1=4.7%, 2=9.6%, 4=22.1%, 8=55.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename0: (groupid=0, jobs=1): err= 0: pid=537094: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=625, BW=2500KiB/s (2561kB/s)(24.5MiB/10022msec) 00:38:18.931 slat (nsec): min=5377, max=81994, avg=24253.45, stdev=12419.40 00:38:18.931 clat (usec): min=7157, max=41686, avg=25396.70, stdev=1614.01 00:38:18.931 lat (usec): min=7172, max=41693, avg=25420.95, stdev=1613.97 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[19268], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:38:18.931 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.931 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26084], 95.00th=[26608], 00:38:18.931 | 99.00th=[31851], 99.50th=[33162], 99.90th=[39584], 99.95th=[41681], 00:38:18.931 | 99.99th=[41681] 00:38:18.931 bw ( KiB/s): min= 2432, max= 2656, per=4.21%, avg=2499.05, stdev=70.37, samples=20 00:38:18.931 iops : min= 608, max= 664, avg=624.75, stdev=17.58, samples=20 00:38:18.931 lat (msec) : 10=0.11%, 20=0.94%, 50=98.95% 00:38:18.931 cpu : usr=96.58%, sys=3.02%, ctx=25, majf=0, minf=36 00:38:18.931 IO depths : 1=5.8%, 2=11.6%, 4=24.0%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename1: (groupid=0, jobs=1): err= 0: pid=537095: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=590, BW=2361KiB/s (2418kB/s)(23.1MiB/10001msec) 00:38:18.931 slat (nsec): min=4335, max=84318, avg=22052.51, stdev=13089.13 00:38:18.931 clat (usec): min=9902, max=50249, avg=26975.43, stdev=4317.69 00:38:18.931 lat (usec): min=9920, max=50264, avg=26997.49, stdev=4315.50 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[17695], 5.00th=[23987], 10.00th=[24511], 20.00th=[25035], 00:38:18.931 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:38:18.931 | 70.00th=[26084], 80.00th=[27395], 90.00th=[33162], 95.00th=[37487], 00:38:18.931 | 99.00th=[43254], 99.50th=[44303], 99.90th=[48497], 99.95th=[50070], 00:38:18.931 | 99.99th=[50070] 00:38:18.931 bw ( KiB/s): min= 2144, max= 2560, per=3.96%, avg=2354.95, stdev=106.17, samples=19 00:38:18.931 iops : min= 536, max= 640, avg=588.74, stdev=26.54, samples=19 00:38:18.931 lat (msec) : 10=0.02%, 20=1.64%, 50=98.29%, 100=0.05% 00:38:18.931 cpu : usr=97.50%, sys=2.15%, ctx=21, majf=0, minf=24 00:38:18.931 IO depths : 1=1.1%, 2=2.3%, 4=11.7%, 8=71.2%, 16=13.7%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 
complete : 0=0.0%, 4=91.6%, 8=4.9%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=5903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename1: (groupid=0, jobs=1): err= 0: pid=537096: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=631, BW=2526KiB/s (2586kB/s)(24.7MiB/10022msec) 00:38:18.931 slat (nsec): min=6208, max=91636, avg=20316.64, stdev=11642.12 00:38:18.931 clat (usec): min=6303, max=45068, avg=25177.54, stdev=2595.01 00:38:18.931 lat (usec): min=6320, max=45086, avg=25197.85, stdev=2595.78 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[12649], 5.00th=[23200], 10.00th=[24511], 20.00th=[24773], 00:38:18.931 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.931 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26084], 95.00th=[26608], 00:38:18.931 | 99.00th=[29492], 99.50th=[39584], 99.90th=[44303], 99.95th=[44827], 00:38:18.931 | 99.99th=[44827] 00:38:18.931 bw ( KiB/s): min= 2422, max= 2864, per=4.25%, avg=2524.30, stdev=109.68, samples=20 00:38:18.931 iops : min= 605, max= 716, avg=631.05, stdev=27.44, samples=20 00:38:18.931 lat (msec) : 10=0.79%, 20=1.74%, 50=97.47% 00:38:18.931 cpu : usr=97.23%, sys=2.40%, ctx=19, majf=0, minf=41 00:38:18.931 IO depths : 1=5.2%, 2=10.6%, 4=22.5%, 8=54.3%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename1: (groupid=0, jobs=1): err= 0: pid=537097: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=624, BW=2499KiB/s (2559kB/s)(24.4MiB/10014msec) 00:38:18.931 slat (nsec): min=6255, max=85384, avg=29381.65, stdev=12110.65 00:38:18.931 clat (usec): min=12649, max=52628, avg=25354.36, stdev=1495.61 00:38:18.931 lat (usec): min=12664, max=52653, avg=25383.74, stdev=1496.36 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[19268], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:38:18.931 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.931 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26084], 95.00th=[26346], 00:38:18.931 | 99.00th=[27395], 99.50th=[33424], 99.90th=[38011], 99.95th=[38011], 00:38:18.931 | 99.99th=[52691] 00:38:18.931 bw ( KiB/s): min= 2432, max= 2560, per=4.21%, avg=2499.37, stdev=62.33, samples=19 00:38:18.931 iops : min= 608, max= 640, avg=624.84, stdev=15.58, samples=19 00:38:18.931 lat (msec) : 20=1.04%, 50=98.91%, 100=0.05% 00:38:18.931 cpu : usr=97.52%, sys=2.14%, ctx=20, majf=0, minf=23 00:38:18.931 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename1: (groupid=0, jobs=1): err= 0: pid=537098: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=609, BW=2438KiB/s (2496kB/s)(23.8MiB/10005msec) 00:38:18.931 slat (nsec): min=5246, max=76997, avg=22184.63, stdev=12510.89 00:38:18.931 clat (usec): min=5463, max=61528, avg=26080.12, stdev=3994.45 00:38:18.931 lat (usec): min=5474, max=61543, 
avg=26102.30, stdev=3994.08 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[15401], 5.00th=[23462], 10.00th=[24511], 20.00th=[25035], 00:38:18.931 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.931 | 70.00th=[25822], 80.00th=[26084], 90.00th=[29230], 95.00th=[33817], 00:38:18.931 | 99.00th=[42730], 99.50th=[43779], 99.90th=[47973], 99.95th=[61604], 00:38:18.931 | 99.99th=[61604] 00:38:18.931 bw ( KiB/s): min= 1755, max= 2560, per=4.08%, avg=2425.84, stdev=182.88, samples=19 00:38:18.931 iops : min= 438, max= 640, avg=606.42, stdev=45.87, samples=19 00:38:18.931 lat (msec) : 10=0.48%, 20=2.87%, 50=96.57%, 100=0.08% 00:38:18.931 cpu : usr=97.55%, sys=2.09%, ctx=16, majf=0, minf=32 00:38:18.931 IO depths : 1=3.3%, 2=6.6%, 4=16.8%, 8=62.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=92.4%, 8=3.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename1: (groupid=0, jobs=1): err= 0: pid=537099: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=627, BW=2510KiB/s (2571kB/s)(24.6MiB/10019msec) 00:38:18.931 slat (nsec): min=6240, max=82428, avg=20309.68, stdev=11214.43 00:38:18.931 clat (usec): min=10160, max=46399, avg=25338.31, stdev=2353.58 00:38:18.931 lat (usec): min=10173, max=46406, avg=25358.62, stdev=2354.13 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[15139], 5.00th=[23462], 10.00th=[24511], 20.00th=[24773], 00:38:18.931 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.931 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26084], 95.00th=[26608], 00:38:18.931 | 99.00th=[36439], 99.50th=[36963], 99.90th=[40109], 99.95th=[45876], 00:38:18.931 | 99.99th=[46400] 00:38:18.931 bw ( KiB/s): min= 2422, max= 2688, per=4.22%, avg=2508.30, stdev=84.19, samples=20 00:38:18.931 iops : min= 605, max= 672, avg=627.05, stdev=21.07, samples=20 00:38:18.931 lat (msec) : 20=2.29%, 50=97.71% 00:38:18.931 cpu : usr=97.55%, sys=2.08%, ctx=17, majf=0, minf=41 00:38:18.931 IO depths : 1=3.9%, 2=8.1%, 4=22.4%, 8=56.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.931 filename1: (groupid=0, jobs=1): err= 0: pid=537100: Thu Jul 25 14:06:13 2024 00:38:18.931 read: IOPS=614, BW=2456KiB/s (2515kB/s)(24.0MiB/10007msec) 00:38:18.931 slat (nsec): min=5471, max=78601, avg=19881.40, stdev=13047.68 00:38:18.931 clat (usec): min=10208, max=50132, avg=25927.05, stdev=3777.35 00:38:18.931 lat (usec): min=10215, max=50174, avg=25946.93, stdev=3776.99 00:38:18.931 clat percentiles (usec): 00:38:18.931 | 1.00th=[15139], 5.00th=[21365], 10.00th=[24249], 20.00th=[24773], 00:38:18.931 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:38:18.931 | 70.00th=[25822], 80.00th=[26084], 90.00th=[28181], 95.00th=[33817], 00:38:18.931 | 99.00th=[40633], 99.50th=[43779], 99.90th=[48497], 99.95th=[50070], 00:38:18.931 | 99.99th=[50070] 00:38:18.931 bw ( KiB/s): min= 2336, max= 2616, per=4.13%, avg=2452.63, stdev=77.26, samples=19 00:38:18.931 iops : min= 584, max= 654, avg=613.16, 
stdev=19.31, samples=19 00:38:18.931 lat (msec) : 20=4.07%, 50=95.87%, 100=0.07% 00:38:18.931 cpu : usr=97.19%, sys=2.46%, ctx=19, majf=0, minf=36 00:38:18.931 IO depths : 1=1.7%, 2=3.4%, 4=10.4%, 8=71.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:38:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 complete : 0=0.0%, 4=91.1%, 8=5.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.931 issued rwts: total=6145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename1: (groupid=0, jobs=1): err= 0: pid=537101: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=626, BW=2505KiB/s (2565kB/s)(24.5MiB/10020msec) 00:38:18.932 slat (nsec): min=4673, max=86498, avg=24614.54, stdev=12457.51 00:38:18.932 clat (usec): min=10453, max=43368, avg=25351.95, stdev=1903.91 00:38:18.932 lat (usec): min=10472, max=43382, avg=25376.56, stdev=1904.07 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[18220], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.932 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26084], 95.00th=[26608], 00:38:18.932 | 99.00th=[31851], 99.50th=[33817], 99.90th=[38536], 99.95th=[43254], 00:38:18.932 | 99.99th=[43254] 00:38:18.932 bw ( KiB/s): min= 2432, max= 2784, per=4.22%, avg=2503.20, stdev=92.35, samples=20 00:38:18.932 iops : min= 608, max= 696, avg=625.80, stdev=23.09, samples=20 00:38:18.932 lat (msec) : 20=1.74%, 50=98.26% 00:38:18.932 cpu : usr=97.03%, sys=2.62%, ctx=20, majf=0, minf=41 00:38:18.932 IO depths : 1=5.3%, 2=10.7%, 4=22.7%, 8=53.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename1: (groupid=0, jobs=1): err= 0: pid=537102: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=621, BW=2486KiB/s (2545kB/s)(24.3MiB/10016msec) 00:38:18.932 slat (nsec): min=4923, max=80918, avg=27605.62, stdev=13312.97 00:38:18.932 clat (usec): min=12525, max=46941, avg=25520.76, stdev=2156.50 00:38:18.932 lat (usec): min=12539, max=46955, avg=25548.37, stdev=2156.62 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[18482], 5.00th=[23987], 10.00th=[24511], 20.00th=[25035], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.932 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26346], 95.00th=[26870], 00:38:18.932 | 99.00th=[35914], 99.50th=[39584], 99.90th=[41681], 99.95th=[46924], 00:38:18.932 | 99.99th=[46924] 00:38:18.932 bw ( KiB/s): min= 2352, max= 2560, per=4.18%, avg=2483.20, stdev=63.92, samples=20 00:38:18.932 iops : min= 588, max= 640, avg=620.80, stdev=15.98, samples=20 00:38:18.932 lat (msec) : 20=1.78%, 50=98.22% 00:38:18.932 cpu : usr=97.49%, sys=2.15%, ctx=19, majf=0, minf=32 00:38:18.932 IO depths : 1=4.0%, 2=8.4%, 4=20.6%, 8=58.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename2: (groupid=0, jobs=1): err= 0: 
pid=537103: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=630, BW=2522KiB/s (2583kB/s)(24.6MiB/10007msec) 00:38:18.932 slat (usec): min=6, max=102, avg=36.36, stdev=14.37 00:38:18.932 clat (usec): min=2661, max=29615, avg=25055.00, stdev=2094.69 00:38:18.932 lat (usec): min=2685, max=29623, avg=25091.36, stdev=2097.02 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[15270], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.932 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26084], 95.00th=[26346], 00:38:18.932 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[29492], 00:38:18.932 | 99.99th=[29492] 00:38:18.932 bw ( KiB/s): min= 2432, max= 2992, per=4.25%, avg=2522.11, stdev=130.55, samples=19 00:38:18.932 iops : min= 608, max= 748, avg=630.53, stdev=32.64, samples=19 00:38:18.932 lat (msec) : 4=0.51%, 10=0.14%, 20=1.11%, 50=98.24% 00:38:18.932 cpu : usr=98.64%, sys=1.04%, ctx=18, majf=0, minf=55 00:38:18.932 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename2: (groupid=0, jobs=1): err= 0: pid=537104: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=651, BW=2606KiB/s (2668kB/s)(25.5MiB/10018msec) 00:38:18.932 slat (nsec): min=2971, max=82558, avg=13324.26, stdev=8530.24 00:38:18.932 clat (usec): min=2144, max=47385, avg=24470.38, stdev=4587.97 00:38:18.932 lat (usec): min=2150, max=47393, avg=24483.70, stdev=4589.14 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[ 6915], 5.00th=[14615], 10.00th=[20317], 20.00th=[24511], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.932 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26346], 95.00th=[26870], 00:38:18.932 | 99.00th=[39060], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:38:18.932 | 99.99th=[47449] 00:38:18.932 bw ( KiB/s): min= 2432, max= 2976, per=4.38%, avg=2604.00, stdev=149.54, samples=20 00:38:18.932 iops : min= 608, max= 744, avg=651.00, stdev=37.39, samples=20 00:38:18.932 lat (msec) : 4=0.46%, 10=2.11%, 20=6.96%, 50=90.47% 00:38:18.932 cpu : usr=97.24%, sys=2.39%, ctx=22, majf=0, minf=52 00:38:18.932 IO depths : 1=2.6%, 2=5.4%, 4=16.4%, 8=65.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=92.4%, 8=2.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename2: (groupid=0, jobs=1): err= 0: pid=537105: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=623, BW=2493KiB/s (2553kB/s)(24.4MiB/10004msec) 00:38:18.932 slat (nsec): min=6174, max=83314, avg=29125.02, stdev=12113.96 00:38:18.932 clat (usec): min=10847, max=50533, avg=25400.20, stdev=1665.70 00:38:18.932 lat (usec): min=10864, max=50554, avg=25429.33, stdev=1665.79 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.932 | 70.00th=[25560], 
80.00th=[25822], 90.00th=[26084], 95.00th=[26346], 00:38:18.932 | 99.00th=[27657], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:38:18.932 | 99.99th=[50594] 00:38:18.932 bw ( KiB/s): min= 2432, max= 2560, per=4.19%, avg=2490.95, stdev=64.23, samples=19 00:38:18.932 iops : min= 608, max= 640, avg=622.74, stdev=16.06, samples=19 00:38:18.932 lat (msec) : 20=0.61%, 50=99.34%, 100=0.05% 00:38:18.932 cpu : usr=97.20%, sys=2.46%, ctx=23, majf=0, minf=36 00:38:18.932 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename2: (groupid=0, jobs=1): err= 0: pid=537106: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=622, BW=2491KiB/s (2551kB/s)(24.4MiB/10020msec) 00:38:18.932 slat (nsec): min=4262, max=80032, avg=25658.94, stdev=11966.45 00:38:18.932 clat (usec): min=15855, max=49007, avg=25478.80, stdev=1655.55 00:38:18.932 lat (usec): min=15871, max=49020, avg=25504.45, stdev=1655.15 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[19006], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:38:18.932 | 70.00th=[25822], 80.00th=[25822], 90.00th=[26084], 95.00th=[26608], 00:38:18.932 | 99.00th=[33162], 99.50th=[33817], 99.90th=[37487], 99.95th=[38011], 00:38:18.932 | 99.99th=[49021] 00:38:18.932 bw ( KiB/s): min= 2432, max= 2560, per=4.19%, avg=2489.60, stdev=63.87, samples=20 00:38:18.932 iops : min= 608, max= 640, avg=622.40, stdev=15.97, samples=20 00:38:18.932 lat (msec) : 20=1.14%, 50=98.86% 00:38:18.932 cpu : usr=97.35%, sys=2.31%, ctx=20, majf=0, minf=29 00:38:18.932 IO depths : 1=5.3%, 2=10.6%, 4=23.1%, 8=53.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename2: (groupid=0, jobs=1): err= 0: pid=537107: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=622, BW=2491KiB/s (2551kB/s)(24.4MiB/10016msec) 00:38:18.932 slat (nsec): min=5442, max=78882, avg=27281.30, stdev=13562.16 00:38:18.932 clat (usec): min=5689, max=49079, avg=25465.12, stdev=3283.62 00:38:18.932 lat (usec): min=5696, max=49092, avg=25492.40, stdev=3284.67 00:38:18.932 clat percentiles (usec): 00:38:18.932 | 1.00th=[13435], 5.00th=[23462], 10.00th=[24511], 20.00th=[24773], 00:38:18.932 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:38:18.932 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26346], 95.00th=[27132], 00:38:18.932 | 99.00th=[40109], 99.50th=[42730], 99.90th=[46924], 99.95th=[49021], 00:38:18.932 | 99.99th=[49021] 00:38:18.932 bw ( KiB/s): min= 2384, max= 2560, per=4.19%, avg=2488.40, stdev=55.76, samples=20 00:38:18.932 iops : min= 596, max= 640, avg=622.20, stdev=14.07, samples=20 00:38:18.932 lat (msec) : 10=0.32%, 20=3.77%, 50=95.91% 00:38:18.932 cpu : usr=97.17%, sys=2.46%, ctx=23, majf=0, minf=28 00:38:18.932 IO depths : 1=4.6%, 2=9.4%, 4=21.2%, 8=56.7%, 16=8.1%, 32=0.0%, >=64=0.0% 00:38:18.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.932 issued rwts: total=6238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.932 filename2: (groupid=0, jobs=1): err= 0: pid=537108: Thu Jul 25 14:06:13 2024 00:38:18.932 read: IOPS=598, BW=2394KiB/s (2451kB/s)(23.4MiB/10004msec) 00:38:18.933 slat (nsec): min=4892, max=66261, avg=19144.83, stdev=12072.28 00:38:18.933 clat (usec): min=4426, max=60262, avg=26636.59, stdev=4834.44 00:38:18.933 lat (usec): min=4438, max=60280, avg=26655.74, stdev=4833.29 00:38:18.933 clat percentiles (usec): 00:38:18.933 | 1.00th=[11207], 5.00th=[23725], 10.00th=[24511], 20.00th=[25035], 00:38:18.933 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:38:18.933 | 70.00th=[26084], 80.00th=[26608], 90.00th=[32637], 95.00th=[36439], 00:38:18.933 | 99.00th=[44827], 99.50th=[47449], 99.90th=[50070], 99.95th=[60031], 00:38:18.933 | 99.99th=[60031] 00:38:18.933 bw ( KiB/s): min= 2096, max= 2528, per=4.01%, avg=2379.37, stdev=109.28, samples=19 00:38:18.933 iops : min= 524, max= 632, avg=594.84, stdev=27.32, samples=19 00:38:18.933 lat (msec) : 10=0.94%, 20=1.94%, 50=96.98%, 100=0.15% 00:38:18.933 cpu : usr=97.37%, sys=2.27%, ctx=27, majf=0, minf=35 00:38:18.933 IO depths : 1=0.5%, 2=1.2%, 4=8.8%, 8=74.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:38:18.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.933 complete : 0=0.0%, 4=91.0%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.933 issued rwts: total=5987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.933 filename2: (groupid=0, jobs=1): err= 0: pid=537109: Thu Jul 25 14:06:13 2024 00:38:18.933 read: IOPS=568, BW=2272KiB/s (2327kB/s)(22.2MiB/10004msec) 00:38:18.933 slat (nsec): min=6204, max=66696, avg=18685.78, stdev=11982.40 00:38:18.933 clat (usec): min=4618, max=70511, avg=28058.46, stdev=5626.59 00:38:18.933 lat (usec): min=4625, max=70530, avg=28077.14, stdev=5625.03 00:38:18.933 clat percentiles (usec): 00:38:18.933 | 1.00th=[15139], 5.00th=[22676], 10.00th=[24773], 20.00th=[25297], 00:38:18.933 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:38:18.933 | 70.00th=[28443], 80.00th=[32113], 90.00th=[36439], 95.00th=[39060], 00:38:18.933 | 99.00th=[43779], 99.50th=[44827], 99.90th=[54789], 99.95th=[70779], 00:38:18.933 | 99.99th=[70779] 00:38:18.933 bw ( KiB/s): min= 1920, max= 2480, per=3.80%, avg=2257.00, stdev=203.41, samples=19 00:38:18.933 iops : min= 480, max= 620, avg=564.21, stdev=50.92, samples=19 00:38:18.933 lat (msec) : 10=0.48%, 20=2.96%, 50=96.29%, 100=0.28% 00:38:18.933 cpu : usr=96.93%, sys=2.71%, ctx=21, majf=0, minf=45 00:38:18.933 IO depths : 1=0.4%, 2=0.9%, 4=10.8%, 8=74.1%, 16=13.8%, 32=0.0%, >=64=0.0% 00:38:18.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.933 complete : 0=0.0%, 4=91.3%, 8=4.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.933 issued rwts: total=5683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.933 filename2: (groupid=0, jobs=1): err= 0: pid=537110: Thu Jul 25 14:06:13 2024 00:38:18.933 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.6MiB/10005msec) 00:38:18.933 slat (nsec): min=5162, max=81602, avg=19654.62, stdev=13430.51 00:38:18.933 clat (usec): min=5335, max=48861, 
avg=26360.43, stdev=4454.81 00:38:18.933 lat (usec): min=5343, max=48882, avg=26380.09, stdev=4454.17 00:38:18.933 clat percentiles (usec): 00:38:18.933 | 1.00th=[13435], 5.00th=[23462], 10.00th=[24511], 20.00th=[25035], 00:38:18.933 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:38:18.933 | 70.00th=[26084], 80.00th=[26346], 90.00th=[30802], 95.00th=[35914], 00:38:18.933 | 99.00th=[44303], 99.50th=[46400], 99.90th=[47449], 99.95th=[49021], 00:38:18.933 | 99.99th=[49021] 00:38:18.933 bw ( KiB/s): min= 2260, max= 2576, per=4.06%, avg=2411.58, stdev=85.94, samples=19 00:38:18.933 iops : min= 565, max= 644, avg=602.89, stdev=21.49, samples=19 00:38:18.933 lat (msec) : 10=0.55%, 20=2.93%, 50=96.53% 00:38:18.933 cpu : usr=97.21%, sys=2.42%, ctx=15, majf=0, minf=36 00:38:18.933 IO depths : 1=0.2%, 2=0.4%, 4=5.7%, 8=78.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:38:18.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.933 complete : 0=0.0%, 4=90.2%, 8=7.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.933 issued rwts: total=6051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:18.933 00:38:18.933 Run status group 0 (all jobs): 00:38:18.933 READ: bw=58.0MiB/s (60.8MB/s), 2272KiB/s-2606KiB/s (2327kB/s-2668kB/s), io=581MiB (609MB), run=10001-10023msec 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:18.933 14:06:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 bdev_null0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 [2024-07-25 14:06:14.158538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.933 bdev_null1 00:38:18.933 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:18.934 { 00:38:18.934 "params": { 00:38:18.934 "name": "Nvme$subsystem", 00:38:18.934 "trtype": "$TEST_TRANSPORT", 00:38:18.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:18.934 "adrfam": "ipv4", 00:38:18.934 "trsvcid": "$NVMF_PORT", 00:38:18.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:18.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:18.934 "hdgst": ${hdgst:-false}, 00:38:18.934 "ddgst": ${ddgst:-false} 00:38:18.934 }, 00:38:18.934 "method": "bdev_nvme_attach_controller" 00:38:18.934 } 00:38:18.934 EOF 00:38:18.934 )") 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:18.934 { 00:38:18.934 "params": { 00:38:18.934 "name": "Nvme$subsystem", 00:38:18.934 "trtype": "$TEST_TRANSPORT", 00:38:18.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:18.934 "adrfam": "ipv4", 00:38:18.934 "trsvcid": "$NVMF_PORT", 00:38:18.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:18.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:18.934 "hdgst": ${hdgst:-false}, 00:38:18.934 "ddgst": ${ddgst:-false} 00:38:18.934 }, 00:38:18.934 "method": "bdev_nvme_attach_controller" 00:38:18.934 } 00:38:18.934 EOF 00:38:18.934 )") 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 
00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:18.934 "params": { 00:38:18.934 "name": "Nvme0", 00:38:18.934 "trtype": "tcp", 00:38:18.934 "traddr": "10.0.0.2", 00:38:18.934 "adrfam": "ipv4", 00:38:18.934 "trsvcid": "4420", 00:38:18.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:18.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:18.934 "hdgst": false, 00:38:18.934 "ddgst": false 00:38:18.934 }, 00:38:18.934 "method": "bdev_nvme_attach_controller" 00:38:18.934 },{ 00:38:18.934 "params": { 00:38:18.934 "name": "Nvme1", 00:38:18.934 "trtype": "tcp", 00:38:18.934 "traddr": "10.0.0.2", 00:38:18.934 "adrfam": "ipv4", 00:38:18.934 "trsvcid": "4420", 00:38:18.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:18.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:18.934 "hdgst": false, 00:38:18.934 "ddgst": false 00:38:18.934 }, 00:38:18.934 "method": "bdev_nvme_attach_controller" 00:38:18.934 }' 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:18.934 14:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:18.934 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:18.934 ... 00:38:18.934 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:18.934 ... 
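Note: the xtrace above shows the complete launch path for this subtest. gen_nvmf_target_json expands one heredoc per subsystem into bdev_nvme_attach_controller parameters, `jq .` validates and joins them into the config that fio's spdk_bdev engine reads from /dev/fd/62, and the harness preloads the plugin, prefixed by the ASan runtime whenever `ldd | grep libasan` finds one (empty in this run, hence the leading space in LD_PRELOAD). A minimal standalone sketch of the same launch, assuming an SPDK tree built with ./configure --with-fio, a target already listening on 10.0.0.2:4420, and hypothetical file names bdev.json and dif.fio:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # Preload the sanitizer runtime ahead of the plugin if the plugin links it.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
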
00:38:18.934 fio-3.35 00:38:18.934 Starting 4 threads 00:38:18.934 EAL: No free 2048 kB hugepages reported on node 1 00:38:24.210 00:38:24.210 filename0: (groupid=0, jobs=1): err= 0: pid=539496: Thu Jul 25 14:06:20 2024 00:38:24.210 read: IOPS=2793, BW=21.8MiB/s (22.9MB/s)(109MiB/5001msec) 00:38:24.210 slat (usec): min=5, max=194, avg= 9.59, stdev= 4.05 00:38:24.210 clat (usec): min=1558, max=45350, avg=2837.88, stdev=1118.94 00:38:24.210 lat (usec): min=1564, max=45372, avg=2847.47, stdev=1118.88 00:38:24.210 clat percentiles (usec): 00:38:24.210 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2442], 00:38:24.210 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:38:24.210 | 70.00th=[ 2966], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3720], 00:38:24.210 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4621], 99.95th=[45351], 00:38:24.210 | 99.99th=[45351] 00:38:24.210 bw ( KiB/s): min=21002, max=23296, per=25.24%, avg=22427.78, stdev=651.90, samples=9 00:38:24.210 iops : min= 2625, max= 2912, avg=2803.44, stdev=81.56, samples=9 00:38:24.210 lat (msec) : 2=1.48%, 4=96.49%, 10=1.97%, 50=0.06% 00:38:24.210 cpu : usr=91.18%, sys=7.14%, ctx=352, majf=0, minf=9 00:38:24.210 IO depths : 1=0.2%, 2=1.5%, 4=68.3%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 issued rwts: total=13970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:24.210 filename0: (groupid=0, jobs=1): err= 0: pid=539497: Thu Jul 25 14:06:20 2024 00:38:24.210 read: IOPS=2744, BW=21.4MiB/s (22.5MB/s)(107MiB/5001msec) 00:38:24.210 slat (nsec): min=5805, max=50698, avg=9018.32, stdev=3479.32 00:38:24.210 clat (usec): min=1417, max=5203, avg=2891.00, stdev=444.83 00:38:24.210 lat (usec): min=1423, max=5234, avg=2900.02, stdev=444.99 00:38:24.210 clat percentiles (usec): 00:38:24.210 | 1.00th=[ 1942], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2540], 00:38:24.210 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:38:24.210 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3687], 00:38:24.210 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4621], 99.95th=[ 4752], 00:38:24.210 | 99.99th=[ 4948] 00:38:24.210 bw ( KiB/s): min=20480, max=23422, per=24.71%, avg=21950.00, stdev=869.31, samples=9 00:38:24.210 iops : min= 2560, max= 2927, avg=2743.67, stdev=108.51, samples=9 00:38:24.210 lat (msec) : 2=1.61%, 4=96.81%, 10=1.58% 00:38:24.210 cpu : usr=93.10%, sys=6.22%, ctx=233, majf=0, minf=9 00:38:24.210 IO depths : 1=0.2%, 2=2.0%, 4=66.8%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 issued rwts: total=13727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:24.210 filename1: (groupid=0, jobs=1): err= 0: pid=539498: Thu Jul 25 14:06:20 2024 00:38:24.210 read: IOPS=2804, BW=21.9MiB/s (23.0MB/s)(110MiB/5002msec) 00:38:24.210 slat (usec): min=5, max=140, avg= 9.02, stdev= 3.50 00:38:24.210 clat (usec): min=1501, max=4839, avg=2828.24, stdev=475.25 00:38:24.210 lat (usec): min=1507, max=4845, avg=2837.26, stdev=475.18 00:38:24.210 clat percentiles (usec): 00:38:24.210 | 1.00th=[ 1926], 5.00th=[ 2180], 10.00th=[ 
2311], 20.00th=[ 2442], 00:38:24.210 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2868], 00:38:24.210 | 70.00th=[ 2999], 80.00th=[ 3195], 90.00th=[ 3523], 95.00th=[ 3752], 00:38:24.210 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4555], 99.95th=[ 4686], 00:38:24.210 | 99.99th=[ 4817] 00:38:24.210 bw ( KiB/s): min=21504, max=23568, per=25.29%, avg=22465.78, stdev=653.68, samples=9 00:38:24.210 iops : min= 2688, max= 2946, avg=2808.22, stdev=81.71, samples=9 00:38:24.210 lat (msec) : 2=1.55%, 4=96.41%, 10=2.05% 00:38:24.210 cpu : usr=93.50%, sys=5.76%, ctx=267, majf=0, minf=9 00:38:24.210 IO depths : 1=0.2%, 2=1.7%, 4=68.1%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 issued rwts: total=14030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:24.210 filename1: (groupid=0, jobs=1): err= 0: pid=539499: Thu Jul 25 14:06:20 2024 00:38:24.210 read: IOPS=2763, BW=21.6MiB/s (22.6MB/s)(108MiB/5001msec) 00:38:24.210 slat (nsec): min=5783, max=90703, avg=9173.83, stdev=3342.25 00:38:24.210 clat (usec): min=1455, max=4714, avg=2871.03, stdev=421.20 00:38:24.210 lat (usec): min=1467, max=4720, avg=2880.21, stdev=421.22 00:38:24.210 clat percentiles (usec): 00:38:24.210 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2507], 00:38:24.210 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:38:24.210 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3589], 00:38:24.210 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4359], 99.95th=[ 4490], 00:38:24.210 | 99.99th=[ 4555] 00:38:24.210 bw ( KiB/s): min=20672, max=23568, per=24.80%, avg=22035.56, stdev=841.06, samples=9 00:38:24.210 iops : min= 2584, max= 2946, avg=2754.44, stdev=105.13, samples=9 00:38:24.210 lat (msec) : 2=1.43%, 4=97.53%, 10=1.03% 00:38:24.210 cpu : usr=93.70%, sys=6.00%, ctx=6, majf=0, minf=9 00:38:24.210 IO depths : 1=0.2%, 2=2.1%, 4=66.1%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.210 issued rwts: total=13822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:24.210 00:38:24.210 Run status group 0 (all jobs): 00:38:24.210 READ: bw=86.8MiB/s (91.0MB/s), 21.4MiB/s-21.9MiB/s (22.5MB/s-23.0MB/s), io=434MiB (455MB), run=5001-5002msec 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.210 00:38:24.210 real 0m24.190s 00:38:24.210 user 4m54.692s 00:38:24.210 sys 0m9.329s 00:38:24.210 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:24.211 14:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.211 ************************************ 00:38:24.211 END TEST fio_dif_rand_params 00:38:24.211 ************************************ 00:38:24.211 14:06:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:24.211 14:06:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:24.211 14:06:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:24.211 14:06:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:24.211 ************************************ 00:38:24.211 START TEST fio_dif_digest 00:38:24.211 ************************************ 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:24.211 14:06:20 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.211 bdev_null0 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.211 [2024-07-25 14:06:20.556949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:24.211 { 00:38:24.211 "params": { 00:38:24.211 "name": "Nvme$subsystem", 00:38:24.211 "trtype": "$TEST_TRANSPORT", 00:38:24.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.211 "adrfam": "ipv4", 00:38:24.211 "trsvcid": "$NVMF_PORT", 00:38:24.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.211 "hdgst": ${hdgst:-false}, 00:38:24.211 "ddgst": ${ddgst:-false} 00:38:24.211 }, 00:38:24.211 "method": "bdev_nvme_attach_controller" 00:38:24.211 } 00:38:24.211 EOF 00:38:24.211 )") 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
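For reference, the fio_bdev invocation assembled in this trace boils down to preloading SPDK's bdev fio plugin and handing fio two generated files: a bdev JSON config (the blob printed just below) and a jobfile. A minimal hand-written equivalent is sketched here — the /tmp file paths are assumptions, the job parameters are the ones this digest test uses (randread, bs=128k, iodepth=3, 3 jobs, 10s), and the plugin/fio paths are the ones visible in the trace:

# Sketch only: what fio_bdev wires together here, written out by hand.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON

cat > /tmp/digest.fio <<'FIO'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1          ; the SPDK bdev plugin requires fio's thread mode
direct=1
bs=128k
iodepth=3
rw=randread
time_based=1
runtime=10

[filename0]
filename=Nvme0n1  ; bdev name: controller name "Nvme0" + namespace 1
numjobs=3
FIO

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/digest.fio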
00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:24.211 "params": { 00:38:24.211 "name": "Nvme0", 00:38:24.211 "trtype": "tcp", 00:38:24.211 "traddr": "10.0.0.2", 00:38:24.211 "adrfam": "ipv4", 00:38:24.211 "trsvcid": "4420", 00:38:24.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.211 "hdgst": true, 00:38:24.211 "ddgst": true 00:38:24.211 }, 00:38:24.211 "method": "bdev_nvme_attach_controller" 00:38:24.211 }' 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:24.211 14:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.211 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:24.211 ... 
00:38:24.211 fio-3.35 00:38:24.211 Starting 3 threads 00:38:24.211 EAL: No free 2048 kB hugepages reported on node 1 00:38:36.426 00:38:36.426 filename0: (groupid=0, jobs=1): err= 0: pid=540704: Thu Jul 25 14:06:31 2024 00:38:36.426 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10048msec) 00:38:36.426 slat (nsec): min=6072, max=32371, avg=11040.77, stdev=2030.56 00:38:36.426 clat (usec): min=6114, max=93262, avg=10131.30, stdev=3403.94 00:38:36.426 lat (usec): min=6125, max=93273, avg=10142.34, stdev=3404.06 00:38:36.426 clat percentiles (usec): 00:38:36.426 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8356], 00:38:36.426 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10552], 00:38:36.426 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11600], 95.00th=[11863], 00:38:36.426 | 99.00th=[12780], 99.50th=[13304], 99.90th=[53740], 99.95th=[53740], 00:38:36.426 | 99.99th=[92799] 00:38:36.426 bw ( KiB/s): min=34304, max=41472, per=36.89%, avg=37952.00, stdev=1993.16, samples=20 00:38:36.426 iops : min= 268, max= 324, avg=296.50, stdev=15.57, samples=20 00:38:36.426 lat (msec) : 10=41.25%, 20=58.31%, 50=0.03%, 100=0.40% 00:38:36.426 cpu : usr=91.29%, sys=8.38%, ctx=18, majf=0, minf=102 00:38:36.426 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:36.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.426 issued rwts: total=2967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.426 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:36.426 filename0: (groupid=0, jobs=1): err= 0: pid=540705: Thu Jul 25 14:06:31 2024 00:38:36.426 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10046msec) 00:38:36.426 slat (usec): min=6, max=100, avg=11.41, stdev= 2.74 00:38:36.426 clat (usec): min=6741, max=95533, avg=14054.63, stdev=11372.51 00:38:36.426 lat (usec): min=6752, max=95545, avg=14066.04, stdev=11372.54 00:38:36.426 clat percentiles (usec): 00:38:36.426 | 1.00th=[ 7898], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:38:36.426 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:38:36.426 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12780], 95.00th=[52167], 00:38:36.426 | 99.00th=[53740], 99.50th=[56886], 99.90th=[93848], 99.95th=[94897], 00:38:36.426 | 99.99th=[95945] 00:38:36.426 bw ( KiB/s): min=19200, max=35584, per=26.59%, avg=27353.60, stdev=4237.78, samples=20 00:38:36.426 iops : min= 150, max= 278, avg=213.70, stdev=33.11, samples=20 00:38:36.426 lat (msec) : 10=10.66%, 20=82.42%, 50=0.09%, 100=6.83% 00:38:36.426 cpu : usr=92.16%, sys=7.55%, ctx=20, majf=0, minf=182 00:38:36.426 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:36.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.426 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.426 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:36.426 filename0: (groupid=0, jobs=1): err= 0: pid=540706: Thu Jul 25 14:06:31 2024 00:38:36.426 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10046msec) 00:38:36.426 slat (nsec): min=6102, max=28217, avg=10933.78, stdev=2076.87 00:38:36.426 clat (usec): min=5338, max=52000, avg=10123.52, stdev=2222.78 00:38:36.426 lat (usec): min=5345, max=52011, avg=10134.46, stdev=2223.00 00:38:36.426 clat percentiles (usec): 00:38:36.426 | 
1.00th=[ 6849], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8455], 00:38:36.426 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[10683], 00:38:36.426 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[11994], 00:38:36.426 | 99.00th=[12911], 99.50th=[13173], 99.90th=[51119], 99.95th=[51643], 00:38:36.426 | 99.99th=[52167] 00:38:36.426 bw ( KiB/s): min=34560, max=42240, per=36.92%, avg=37977.60, stdev=1932.36, samples=20 00:38:36.426 iops : min= 270, max= 330, avg=296.70, stdev=15.10, samples=20 00:38:36.426 lat (msec) : 10=39.00%, 20=60.83%, 50=0.03%, 100=0.13% 00:38:36.426 cpu : usr=91.45%, sys=8.23%, ctx=22, majf=0, minf=154 00:38:36.426 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:36.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.426 issued rwts: total=2969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.426 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:36.426 00:38:36.426 Run status group 0 (all jobs): 00:38:36.426 READ: bw=100MiB/s (105MB/s), 26.6MiB/s-36.9MiB/s (27.9MB/s-38.7MB/s), io=1009MiB (1058MB), run=10046-10048msec 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.426 00:38:36.426 real 0m11.154s 00:38:36.426 user 0m36.314s 00:38:36.426 sys 0m2.869s 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:36.426 14:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:36.426 ************************************ 00:38:36.426 END TEST fio_dif_digest 00:38:36.426 ************************************ 00:38:36.426 14:06:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:36.426 14:06:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:36.426 rmmod nvme_tcp 00:38:36.426 rmmod nvme_fabrics 
00:38:36.426 rmmod nvme_keyring 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 531347 ']' 00:38:36.426 14:06:31 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 531347 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 531347 ']' 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 531347 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531347 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531347' 00:38:36.426 killing process with pid 531347 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@969 -- # kill 531347 00:38:36.426 14:06:31 nvmf_dif -- common/autotest_common.sh@974 -- # wait 531347 00:38:36.427 14:06:32 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:36.427 14:06:32 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:38.341 Waiting for block devices as requested 00:38:38.341 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:38.601 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:38.601 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:38.601 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:38.859 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:38.859 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:38.859 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:38.859 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:39.119 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:39.119 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:39.119 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:39.378 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:39.378 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:39.378 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:39.637 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:39.637 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:39.637 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:39.896 14:06:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:39.896 14:06:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:39.897 14:06:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:39.897 14:06:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:39.897 14:06:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.897 14:06:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:39.897 14:06:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.803 14:06:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:41.803 00:38:41.803 real 1m15.437s 00:38:41.803 user 7m14.480s 00:38:41.803 sys 0m29.727s 00:38:41.803 14:06:38 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.803 14:06:38 nvmf_dif -- common/autotest_common.sh@10 -- # set 
+x 00:38:41.803 ************************************ 00:38:41.803 END TEST nvmf_dif 00:38:41.803 ************************************ 00:38:42.063 14:06:38 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:42.063 14:06:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:42.063 14:06:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:42.063 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:38:42.063 ************************************ 00:38:42.063 START TEST nvmf_abort_qd_sizes 00:38:42.063 ************************************ 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:42.063 * Looking for test storage... 00:38:42.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.063 14:06:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:42.063 14:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.638 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:48.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:48.639 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:48.639 Found net devices under 0000:af:00.0: cvl_0_0 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:48.639 Found net devices under 0000:af:00.1: cvl_0_1 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
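The two cvl ports discovered above are what the TCP init that follows splits between the host and a private network namespace, so the target (10.0.0.2) and the initiator (10.0.0.1) can exercise real hardware on a single machine. Condensed from the nvmf_tcp_init trace in the next stretch (interface names and addresses are the ones this run picks):

# One NIC, two ports: target side goes into a namespace, initiator stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> host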
00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:48.639 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:48.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:38:48.899 00:38:48.899 --- 10.0.0.2 ping statistics --- 00:38:48.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.899 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:48.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:38:48.899 00:38:48.899 --- 10.0.0.1 ping statistics --- 00:38:48.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.899 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:48.899 14:06:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:52.186 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:52.186 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:53.647 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:53.647 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=548922 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 548922 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 548922 ']' 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:53.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:53.648 14:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:53.648 [2024-07-25 14:06:50.350894] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:38:53.648 [2024-07-25 14:06:50.350943] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.648 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.648 [2024-07-25 14:06:50.393614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:53.648 [2024-07-25 14:06:50.429139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:53.648 [2024-07-25 14:06:50.469559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:53.648 [2024-07-25 14:06:50.469604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:53.648 [2024-07-25 14:06:50.469613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:53.648 [2024-07-25 14:06:50.469622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:53.648 [2024-07-25 14:06:50.469629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:53.648 [2024-07-25 14:06:50.469678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.648 [2024-07-25 14:06:50.469792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:53.648 [2024-07-25 14:06:50.469814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:53.648 [2024-07-25 14:06:50.469816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:54.587 14:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:54.587 ************************************ 00:38:54.587 START TEST spdk_target_abort 00:38:54.587 ************************************ 00:38:54.587 14:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:38:54.587 14:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:54.587 14:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:38:54.587 14:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.587 14:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:57.877 spdk_targetn1 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:57.877 [2024-07-25 14:06:54.097356] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:57.877 [2024-07-25 14:06:54.133633] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:57.877 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:57.878 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:57.878 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:57.878 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:57.878 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:57.878 14:06:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:57.878 EAL: No free 2048 kB hugepages reported on node 1 00:39:01.169 Initializing NVMe Controllers 00:39:01.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:01.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:01.169 Initialization complete. Launching workers. 00:39:01.169 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9460, failed: 0 00:39:01.169 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1461, failed to submit 7999 00:39:01.169 success 856, unsuccess 605, failed 0 00:39:01.169 14:06:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:01.169 14:06:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:01.169 EAL: No free 2048 kB hugepages reported on node 1 00:39:04.459 Initializing NVMe Controllers 00:39:04.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:04.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:04.459 Initialization complete. Launching workers. 00:39:04.459 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8727, failed: 0 00:39:04.459 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1260, failed to submit 7467 00:39:04.459 success 365, unsuccess 895, failed 0 00:39:04.459 14:07:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:04.459 14:07:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:04.459 EAL: No free 2048 kB hugepages reported on node 1 00:39:06.995 Initializing NVMe Controllers 00:39:06.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:06.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:06.995 Initialization complete. Launching workers. 
00:39:06.995 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38658, failed: 0 00:39:06.995 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2711, failed to submit 35947 00:39:06.995 success 596, unsuccess 2115, failed 0 00:39:06.995 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:06.995 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.995 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:07.253 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.253 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:07.253 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.253 14:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 548922 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 548922 ']' 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 548922 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 548922 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 548922' 00:39:09.158 killing process with pid 548922 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 548922 00:39:09.158 14:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 548922 00:39:09.158 00:39:09.158 real 0m14.783s 00:39:09.158 user 0m58.498s 00:39:09.158 sys 0m2.862s 00:39:09.158 14:07:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:09.158 14:07:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:09.158 ************************************ 00:39:09.158 END TEST spdk_target_abort 00:39:09.158 ************************************ 00:39:09.418 14:07:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:09.418 14:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:09.418 14:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:09.418 14:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.418 ************************************ 00:39:09.418 START TEST kernel_target_abort 00:39:09.418 
************************************ 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:09.418 14:07:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:11.953 Waiting for block devices as requested 00:39:11.953 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:12.212 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:12.212 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:12.212 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:12.503 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:12.503 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:12.503 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:12.503 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:12.761 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:12.761 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:12.761 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:13.019 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:13.019 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:13.019 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:13.277 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:13.277 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:13.277 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:13.536 No valid GPT data, bailing 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:13.536 14:07:10 
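The mkdir calls above are the first half of configure_kernel_target, which assembles a kernel-mode NVMe-oF target entirely through configfs; the bare echo lines that follow in the trace are the attribute writes (xtrace shows only the echo side of each redirection). A minimal standalone sketch of the whole sequence, assuming the nvmet modules are loadable and /dev/nvme0n1 is an unclaimed, non-zoned namespace:

modprobe nvmet    # nvmet-tcp is pulled in automatically when the tcp port is configured
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
mkdir ports/1
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The echo SPDK-nqn.2016-06.io.spdk:testnqn in the trace fills in the subsystem's serial/model string; everything else maps one-to-one onto the writes sketched here.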
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:39:13.536 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:39:13.537 00:39:13.537 Discovery Log Number of Records 2, Generation counter 2 00:39:13.537 =====Discovery Log Entry 0====== 00:39:13.537 trtype: tcp 00:39:13.537 adrfam: ipv4 00:39:13.537 subtype: current discovery subsystem 00:39:13.537 treq: not specified, sq flow control disable supported 00:39:13.537 portid: 1 00:39:13.537 trsvcid: 4420 00:39:13.537 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:13.537 traddr: 10.0.0.1 00:39:13.537 eflags: none 00:39:13.537 sectype: none 00:39:13.537 =====Discovery Log Entry 1====== 00:39:13.537 trtype: tcp 00:39:13.537 adrfam: ipv4 00:39:13.537 subtype: nvme subsystem 00:39:13.537 treq: not specified, sq flow control disable supported 00:39:13.537 portid: 1 00:39:13.537 trsvcid: 4420 00:39:13.537 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:13.537 traddr: 10.0.0.1 00:39:13.537 eflags: none 00:39:13.537 sectype: none 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.537 14:07:10 
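With the port symlink in place the target is live, and nvme discover immediately reports the expected two records: the well-known discovery subsystem plus the newly exported nqn.2016-06.io.spdk:testnqn. The same endpoint could be reached with stock nvme-cli instead of the SPDK initiator the test uses next, e.g.:

nvme discover -t tcp -a 10.0.0.1 -s 4420
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn

Instead, rabort drives the target from user space with SPDK's abort example, sweeping the queue depths staged in qds=(4 24 64).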
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:13.537 14:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:13.537 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.824 Initializing NVMe Controllers 00:39:16.824 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:16.824 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:16.824 Initialization complete. Launching workers. 00:39:16.824 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72244, failed: 0 00:39:16.824 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 72244, failed to submit 0 00:39:16.824 success 0, unsuccess 72244, failed 0 00:39:16.824 14:07:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:16.824 14:07:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:16.824 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.107 Initializing NVMe Controllers 00:39:20.107 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:20.107 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:20.107 Initialization complete. Launching workers. 
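Each pass of the loop reruns the abort example with the next queue depth; the target is chosen purely by the -r transport-ID string, so the same binary that hit the SPDK target earlier in the log now hits the kernel target. In sketch form the loop reduces to:

for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

-q caps outstanding I/O, -w rw -M 50 makes the workload a 50/50 read/write mix, and -o 4096 issues 4 KiB I/O. The summary lines count, roughly, aborts that actually cancelled their command (success), aborts that completed after the target I/O had already finished (unsuccess), and aborts that errored out (failed); at queue depth 4 every command completes before its abort arrives, hence success 0.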
00:39:20.107 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 125525, failed: 0 00:39:20.107 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31654, failed to submit 93871 00:39:20.107 success 0, unsuccess 31654, failed 0 00:39:20.107 14:07:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.107 14:07:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.107 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.422 Initializing NVMe Controllers 00:39:23.422 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.422 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.422 Initialization complete. Launching workers. 00:39:23.422 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120967, failed: 0 00:39:23.422 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30254, failed to submit 90713 00:39:23.422 success 0, unsuccess 30254, failed 0 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:23.422 14:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:26.718 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:39:26.719 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:26.719 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:28.098 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:39:28.098 00:39:28.098 real 0m18.622s 00:39:28.098 user 0m7.485s 00:39:28.098 sys 0m5.874s 00:39:28.098 14:07:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:28.098 14:07:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:28.098 ************************************ 00:39:28.098 END TEST kernel_target_abort 00:39:28.098 ************************************ 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:28.098 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:28.098 rmmod nvme_tcp 00:39:28.098 rmmod nvme_fabrics 00:39:28.098 rmmod nvme_keyring 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 548922 ']' 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 548922 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 548922 ']' 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 548922 00:39:28.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (548922) - No such process 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 548922 is not found' 00:39:28.099 Process with pid 548922 is not found 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:28.099 14:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:31.391 Waiting for block devices as requested 00:39:31.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:31.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:31.391 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:31.391 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:31.650 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:31.650 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:31.650 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:31.908 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:31.908 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:31.908 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:32.167 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:32.167 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:32.167 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:32.426 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:32.426 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:32.426 0000:80:04.0 
(8086 2021): vfio-pci -> ioatdma 00:39:32.685 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:32.685 14:07:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.279 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:35.279 00:39:35.279 real 0m52.806s 00:39:35.279 user 1m10.474s 00:39:35.279 sys 0m18.837s 00:39:35.279 14:07:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:35.279 14:07:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:35.279 ************************************ 00:39:35.279 END TEST nvmf_abort_qd_sizes 00:39:35.279 ************************************ 00:39:35.279 14:07:31 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:35.279 14:07:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:35.279 14:07:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:35.279 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:39:35.279 ************************************ 00:39:35.279 START TEST keyring_file 00:39:35.279 ************************************ 00:39:35.279 14:07:31 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:35.279 * Looking for test storage... 
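The teardown interleaved above undoes everything in reverse: clean_kernel_target disables the namespace, unlinks the port, removes the configfs directories and unloads the modules, after which the host-side nvme-tcp/nvme-fabrics modules are removed and setup.sh hands the devices back to vfio-pci. Condensed, the kernel-target cleanup is:

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet

With that, nvmf_abort_qd_sizes is finished and the suite moves on to keyring_file, which exercises SPDK's file-based keyring over NVMe/TCP with TLS.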
00:39:35.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:35.279 14:07:31 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:35.279 14:07:31 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.279 14:07:31 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.279 14:07:31 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.279 14:07:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.279 14:07:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.279 14:07:31 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.279 14:07:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:35.279 14:07:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:35.279 14:07:31 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:35.279 14:07:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:35.279 14:07:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rljkefDMo7 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:35.280 14:07:31 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rljkefDMo7 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rljkefDMo7 00:39:35.280 14:07:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rljkefDMo7 00:39:35.280 14:07:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lUVqIQ5x4d 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:35.280 14:07:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lUVqIQ5x4d 00:39:35.280 14:07:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lUVqIQ5x4d 00:39:35.280 14:07:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lUVqIQ5x4d 00:39:35.280 14:07:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=558093 00:39:35.280 14:07:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 558093 00:39:35.280 14:07:31 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 558093 ']' 00:39:35.280 14:07:31 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:35.280 14:07:31 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:35.280 14:07:31 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:35.280 14:07:31 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:35.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:35.280 14:07:31 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:35.280 14:07:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:35.280 [2024-07-25 14:07:31.925557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:39:35.280 [2024-07-25 14:07:31.925614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558093 ] 00:39:35.280 EAL: No free 2048 kB hugepages reported on node 1 00:39:35.280 [2024-07-25 14:07:31.961847] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
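prep_key above turned each raw hex string into the NVMe TLS PSK interchange format, wrote it to a mktemp file and locked the file down to mode 0600. The format is the NVMeTLSkey-1 prefix, a two-digit hash identifier (00 here, i.e. no PSK hash transform) and a base64 payload of the key material with a CRC32 appended, which is what the inline python in the trace computes. A self-contained sketch, assuming the hex string is decoded to raw bytes and the CRC32 is appended little-endian per the interchange-format definition (SPDK's helper may treat the string slightly differently):

key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key_hex" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])             # raw PSK bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity check appended to the key
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
chmod 0600 "$path"   # the keyring refuses key files readable by group/other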
00:39:35.280 [2024-07-25 14:07:31.996589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.280 [2024-07-25 14:07:32.036031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.849 14:07:32 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:35.849 14:07:32 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:35.849 14:07:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:35.849 14:07:32 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.849 14:07:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:35.849 [2024-07-25 14:07:32.715337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:35.849 null0 00:39:36.109 [2024-07-25 14:07:32.747394] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:36.109 [2024-07-25 14:07:32.747740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:36.109 [2024-07-25 14:07:32.755401] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.109 14:07:32 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:36.109 [2024-07-25 14:07:32.763420] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:36.109 request: 00:39:36.109 { 00:39:36.109 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:36.109 "secure_channel": false, 00:39:36.109 "listen_address": { 00:39:36.109 "trtype": "tcp", 00:39:36.109 "traddr": "127.0.0.1", 00:39:36.109 "trsvcid": "4420" 00:39:36.109 }, 00:39:36.109 "method": "nvmf_subsystem_add_listener", 00:39:36.109 "req_id": 1 00:39:36.109 } 00:39:36.109 Got JSON-RPC error response 00:39:36.109 response: 00:39:36.109 { 00:39:36.109 "code": -32602, 00:39:36.109 "message": "Invalid parameters" 00:39:36.109 } 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:36.109 14:07:32 keyring_file -- keyring/file.sh@46 -- # bperfpid=558101 00:39:36.109 14:07:32 
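The failure just above is deliberate: adding the same listener twice has to be rejected, and the NOT wrapper from autotest_common.sh inverts the exit status so that an expected error keeps the test green. The pattern is roughly:

NOT() { "$@" && return 1 || return 0; }   # sketch; the real helper also handles signal exit codes
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0

The -32602 "Invalid parameters" response ("Listener already exists") is exactly the outcome the assertion wants. A bdevperf instance is then launched on its own RPC socket (/var/tmp/bperf.sock) to serve as the TLS initiator for the rest of the test.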
keyring_file -- keyring/file.sh@48 -- # waitforlisten 558101 /var/tmp/bperf.sock 00:39:36.109 14:07:32 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 558101 ']' 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:36.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:36.109 [2024-07-25 14:07:32.800912] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:39:36.109 [2024-07-25 14:07:32.800961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558101 ] 00:39:36.109 EAL: No free 2048 kB hugepages reported on node 1 00:39:36.109 [2024-07-25 14:07:32.837010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:36.109 [2024-07-25 14:07:32.871753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.109 [2024-07-25 14:07:32.910027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:36.109 14:07:32 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:36.109 14:07:32 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:36.109 14:07:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:36.369 14:07:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lUVqIQ5x4d 00:39:36.369 14:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lUVqIQ5x4d 00:39:36.629 14:07:33 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:36.629 14:07:33 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:36.629 14:07:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:36.629 14:07:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:36.629 14:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:36.629 14:07:33 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.rljkefDMo7 == \/\t\m\p\/\t\m\p\.\r\l\j\k\e\f\D\M\o\7 ]] 00:39:36.629 14:07:33 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:39:36.629 14:07:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:36.629 14:07:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:39:36.629 14:07:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:36.629 14:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:36.899 14:07:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lUVqIQ5x4d == \/\t\m\p\/\t\m\p\.\l\U\V\q\I\Q\5\x\4\d ]] 00:39:36.900 14:07:33 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:36.900 14:07:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:36.900 14:07:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:36.900 14:07:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:36.900 14:07:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:36.900 14:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.161 14:07:33 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:37.161 14:07:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:37.161 14:07:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:37.161 14:07:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:37.161 14:07:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:37.161 14:07:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:37.161 14:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.161 14:07:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:37.161 14:07:34 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:37.161 14:07:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:37.419 [2024-07-25 14:07:34.200939] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:37.419 nvme0n1 00:39:37.419 14:07:34 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:37.419 14:07:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:37.419 14:07:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:37.419 14:07:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:37.419 14:07:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.419 14:07:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:37.678 14:07:34 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:37.678 14:07:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:37.678 14:07:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:37.678 14:07:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:37.678 14:07:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:37.678 14:07:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:37.678 14:07:34 keyring_file -- 
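Registering a key and handing it to the NVMe initiator is a two-step RPC flow against the bdevperf socket; once the controller attaches, the key's refcnt climbs from 1 to 2 because the TCP transport pins a reference for the life of the connection, which the next check asserts. As issued in the trace:

scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rljkefDMo7
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0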
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.937 14:07:34 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:37.937 14:07:34 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:37.937 Running I/O for 1 seconds... 00:39:38.874 00:39:38.874 Latency(us) 00:39:38.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.874 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:38.874 nvme0n1 : 1.01 11464.06 44.78 0.00 0.00 11107.79 6973.03 16672.36 00:39:38.874 =================================================================================================================== 00:39:38.874 Total : 11464.06 44.78 0.00 0.00 11107.79 6973.03 16672.36 00:39:38.874 0 00:39:38.874 14:07:35 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:38.874 14:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:39.133 14:07:35 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:39.133 14:07:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:39.133 14:07:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:39.133 14:07:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.133 14:07:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:39.133 14:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.392 14:07:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:39.392 14:07:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:39.392 14:07:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:39.392 14:07:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:39.392 14:07:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.392 14:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.392 14:07:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:39.652 14:07:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:39.652 14:07:36 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:39.652 14:07:36 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:39.652 14:07:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:39.652 14:07:36 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:39.652 14:07:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:39.652 14:07:36 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:39.652 14:07:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:39.652 14:07:36 keyring_file -- 
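The I/O itself is driven by telling the already-running bdevperf to execute its configured job (queue depth 128, 4 KiB, 50/50 randrw, 1 second) over the freshly attached TLS-protected controller:

examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Detaching nvme0 afterwards releases the transport's hold on key0, which is why both refcnt assertions drop back to 1.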
common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:39.652 14:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:39.652 [2024-07-25 14:07:36.443797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:39.653 [2024-07-25 14:07:36.444496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb9d30 (107): Transport endpoint is not connected 00:39:39.653 [2024-07-25 14:07:36.445489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb9d30 (9): Bad file descriptor 00:39:39.653 [2024-07-25 14:07:36.446490] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:39.653 [2024-07-25 14:07:36.446503] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:39.653 [2024-07-25 14:07:36.446512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:39.653 request: 00:39:39.653 { 00:39:39.653 "name": "nvme0", 00:39:39.653 "trtype": "tcp", 00:39:39.653 "traddr": "127.0.0.1", 00:39:39.653 "adrfam": "ipv4", 00:39:39.653 "trsvcid": "4420", 00:39:39.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:39.653 "prchk_reftag": false, 00:39:39.653 "prchk_guard": false, 00:39:39.653 "hdgst": false, 00:39:39.653 "ddgst": false, 00:39:39.653 "psk": "key1", 00:39:39.653 "method": "bdev_nvme_attach_controller", 00:39:39.653 "req_id": 1 00:39:39.653 } 00:39:39.653 Got JSON-RPC error response 00:39:39.653 response: 00:39:39.653 { 00:39:39.653 "code": -5, 00:39:39.653 "message": "Input/output error" 00:39:39.653 } 00:39:39.653 14:07:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:39.653 14:07:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:39.653 14:07:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:39.653 14:07:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:39.653 14:07:36 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:39.653 14:07:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:39.653 14:07:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:39.653 14:07:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.653 14:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.653 14:07:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:39.913 14:07:36 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:39.913 14:07:36 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:39.913 14:07:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:39.913 14:07:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:39.913 14:07:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.913 14:07:36 keyring_file -- 
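Because the target side was provisioned with key0's PSK, dialing in with key1 makes the TLS handshake collapse underneath the NVMe layer: the initiator sees errno 107 (Transport endpoint is not connected) and the RPC fails with -5 Input/output error. The failure is again the pass condition, wrapped in NOT:

# expected to fail: the target only knows key0's PSK
NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key1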
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:39.913 14:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:40.172 14:07:36 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:40.172 14:07:36 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:40.172 14:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:40.172 14:07:36 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:40.172 14:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:40.431 14:07:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:40.431 14:07:37 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:40.431 14:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:40.431 14:07:37 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:40.431 14:07:37 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rljkefDMo7 00:39:40.431 14:07:37 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:40.431 14:07:37 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:40.431 14:07:37 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:40.431 14:07:37 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:40.431 14:07:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:40.431 14:07:37 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:40.690 14:07:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:40.690 14:07:37 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:40.690 14:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:40.690 [2024-07-25 14:07:37.471447] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rljkefDMo7': 0100660 00:39:40.690 [2024-07-25 14:07:37.471471] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:40.690 request: 00:39:40.690 { 00:39:40.690 "name": "key0", 00:39:40.690 "path": "/tmp/tmp.rljkefDMo7", 00:39:40.690 "method": "keyring_file_add_key", 00:39:40.690 "req_id": 1 00:39:40.690 } 00:39:40.690 Got JSON-RPC error response 00:39:40.690 response: 00:39:40.690 { 00:39:40.690 "code": -1, 00:39:40.690 "message": "Operation not permitted" 00:39:40.690 } 00:39:40.690 14:07:37 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:40.690 14:07:37 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:40.690 14:07:37 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:40.690 14:07:37 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:40.690 14:07:37 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.rljkefDMo7 00:39:40.690 14:07:37 keyring_file -- 
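keyring_file enforces POSIX permissions at registration time: a key file accessible to anyone but its owner is refused outright, which the test proves by flipping the mode and watching the RPC bounce before restoring it:

chmod 0660 /tmp/tmp.rljkefDMo7
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rljkefDMo7   # rejected: -1 Operation not permitted
chmod 0600 /tmp/tmp.rljkefDMo7
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rljkefDMo7   # accepted

The follow-up just below (rm -f on the backing file, then another attach attempt) shows the key is also validated at use time: with the file gone, the attach fails with -19 "No such device".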
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:40.690 14:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rljkefDMo7 00:39:40.949 14:07:37 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rljkefDMo7 00:39:40.949 14:07:37 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:40.949 14:07:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:40.949 14:07:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:40.949 14:07:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:40.949 14:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:40.949 14:07:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.209 14:07:37 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:41.209 14:07:37 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.209 14:07:37 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:41.209 14:07:37 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.209 14:07:37 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:41.210 14:07:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:41.210 14:07:37 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:41.210 14:07:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:41.210 14:07:37 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.210 14:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.210 [2024-07-25 14:07:38.008864] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rljkefDMo7': No such file or directory 00:39:41.210 [2024-07-25 14:07:38.008888] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:41.210 [2024-07-25 14:07:38.008909] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:41.210 [2024-07-25 14:07:38.008917] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:41.210 [2024-07-25 14:07:38.008925] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:41.210 request: 00:39:41.210 { 00:39:41.210 "name": "nvme0", 00:39:41.210 "trtype": "tcp", 00:39:41.210 "traddr": "127.0.0.1", 00:39:41.210 "adrfam": "ipv4", 00:39:41.210 "trsvcid": "4420", 00:39:41.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:41.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:41.210 "prchk_reftag": false, 00:39:41.210 
"prchk_guard": false, 00:39:41.210 "hdgst": false, 00:39:41.210 "ddgst": false, 00:39:41.210 "psk": "key0", 00:39:41.210 "method": "bdev_nvme_attach_controller", 00:39:41.210 "req_id": 1 00:39:41.210 } 00:39:41.210 Got JSON-RPC error response 00:39:41.210 response: 00:39:41.210 { 00:39:41.210 "code": -19, 00:39:41.210 "message": "No such device" 00:39:41.210 } 00:39:41.210 14:07:38 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:41.210 14:07:38 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:41.210 14:07:38 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:41.210 14:07:38 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:41.210 14:07:38 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:41.210 14:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:41.469 14:07:38 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eHFZcc6ANh 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:41.469 14:07:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:41.469 14:07:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:41.469 14:07:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:41.469 14:07:38 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:41.469 14:07:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:41.469 14:07:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eHFZcc6ANh 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eHFZcc6ANh 00:39:41.469 14:07:38 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.eHFZcc6ANh 00:39:41.469 14:07:38 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eHFZcc6ANh 00:39:41.469 14:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eHFZcc6ANh 00:39:41.728 14:07:38 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.728 14:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:41.987 nvme0n1 00:39:41.987 14:07:38 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:39:41.987 14:07:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:41.987 14:07:38 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.987 14:07:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.987 14:07:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.987 14:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.987 14:07:38 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:41.987 14:07:38 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:41.987 14:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:42.246 14:07:39 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:42.246 14:07:39 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:42.246 14:07:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:42.246 14:07:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.246 14:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.505 14:07:39 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:42.505 14:07:39 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:42.505 14:07:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.505 14:07:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:42.505 14:07:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.505 14:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.505 14:07:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:42.505 14:07:39 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:42.764 14:07:39 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:42.764 14:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:42.764 14:07:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:42.764 14:07:39 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:42.764 14:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:43.023 14:07:39 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:43.023 14:07:39 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eHFZcc6ANh 00:39:43.023 14:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eHFZcc6ANh 00:39:43.282 14:07:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lUVqIQ5x4d 00:39:43.282 14:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lUVqIQ5x4d 00:39:43.282 14:07:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.282 14:07:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.541 nvme0n1 00:39:43.541 14:07:40 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:43.541 14:07:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:43.801 14:07:40 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:43.801 "subsystems": [ 00:39:43.801 { 00:39:43.801 "subsystem": "keyring", 00:39:43.801 "config": [ 00:39:43.801 { 00:39:43.801 "method": "keyring_file_add_key", 00:39:43.801 "params": { 00:39:43.801 "name": "key0", 00:39:43.801 "path": "/tmp/tmp.eHFZcc6ANh" 00:39:43.801 } 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "method": "keyring_file_add_key", 00:39:43.801 "params": { 00:39:43.801 "name": "key1", 00:39:43.801 "path": "/tmp/tmp.lUVqIQ5x4d" 00:39:43.801 } 00:39:43.801 } 00:39:43.801 ] 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "subsystem": "iobuf", 00:39:43.801 "config": [ 00:39:43.801 { 00:39:43.801 "method": "iobuf_set_options", 00:39:43.801 "params": { 00:39:43.801 "small_pool_count": 8192, 00:39:43.801 "large_pool_count": 1024, 00:39:43.801 "small_bufsize": 8192, 00:39:43.801 "large_bufsize": 135168 00:39:43.801 } 00:39:43.801 } 00:39:43.801 ] 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "subsystem": "sock", 00:39:43.801 "config": [ 00:39:43.801 { 00:39:43.801 "method": "sock_set_default_impl", 00:39:43.801 "params": { 00:39:43.801 "impl_name": "posix" 00:39:43.801 } 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "method": "sock_impl_set_options", 00:39:43.801 "params": { 00:39:43.801 "impl_name": "ssl", 00:39:43.801 "recv_buf_size": 4096, 00:39:43.801 "send_buf_size": 4096, 00:39:43.801 "enable_recv_pipe": true, 00:39:43.801 "enable_quickack": false, 00:39:43.801 "enable_placement_id": 0, 00:39:43.801 "enable_zerocopy_send_server": true, 00:39:43.801 "enable_zerocopy_send_client": false, 00:39:43.801 "zerocopy_threshold": 0, 00:39:43.801 "tls_version": 0, 00:39:43.801 "enable_ktls": false 00:39:43.801 } 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "method": "sock_impl_set_options", 00:39:43.801 "params": { 00:39:43.801 "impl_name": "posix", 00:39:43.801 "recv_buf_size": 2097152, 00:39:43.801 "send_buf_size": 2097152, 00:39:43.801 "enable_recv_pipe": true, 00:39:43.801 "enable_quickack": false, 00:39:43.801 "enable_placement_id": 0, 00:39:43.801 "enable_zerocopy_send_server": true, 00:39:43.801 "enable_zerocopy_send_client": false, 00:39:43.801 "zerocopy_threshold": 0, 00:39:43.801 "tls_version": 0, 00:39:43.801 "enable_ktls": false 00:39:43.801 } 00:39:43.801 } 00:39:43.801 ] 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "subsystem": "vmd", 00:39:43.801 "config": [] 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "subsystem": "accel", 00:39:43.801 "config": [ 00:39:43.801 { 00:39:43.801 "method": "accel_set_options", 00:39:43.801 "params": { 00:39:43.801 "small_cache_size": 128, 00:39:43.801 "large_cache_size": 16, 00:39:43.801 "task_count": 2048, 00:39:43.801 "sequence_count": 2048, 00:39:43.801 "buf_count": 2048 00:39:43.801 } 00:39:43.801 } 00:39:43.801 ] 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "subsystem": "bdev", 00:39:43.801 "config": [ 00:39:43.801 { 00:39:43.801 "method": "bdev_set_options", 00:39:43.801 
"params": { 00:39:43.801 "bdev_io_pool_size": 65535, 00:39:43.801 "bdev_io_cache_size": 256, 00:39:43.801 "bdev_auto_examine": true, 00:39:43.801 "iobuf_small_cache_size": 128, 00:39:43.801 "iobuf_large_cache_size": 16 00:39:43.801 } 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "method": "bdev_raid_set_options", 00:39:43.801 "params": { 00:39:43.801 "process_window_size_kb": 1024, 00:39:43.801 "process_max_bandwidth_mb_sec": 0 00:39:43.801 } 00:39:43.801 }, 00:39:43.801 { 00:39:43.801 "method": "bdev_iscsi_set_options", 00:39:43.801 "params": { 00:39:43.802 "timeout_sec": 30 00:39:43.802 } 00:39:43.802 }, 00:39:43.802 { 00:39:43.802 "method": "bdev_nvme_set_options", 00:39:43.802 "params": { 00:39:43.802 "action_on_timeout": "none", 00:39:43.802 "timeout_us": 0, 00:39:43.802 "timeout_admin_us": 0, 00:39:43.802 "keep_alive_timeout_ms": 10000, 00:39:43.802 "arbitration_burst": 0, 00:39:43.802 "low_priority_weight": 0, 00:39:43.802 "medium_priority_weight": 0, 00:39:43.802 "high_priority_weight": 0, 00:39:43.802 "nvme_adminq_poll_period_us": 10000, 00:39:43.802 "nvme_ioq_poll_period_us": 0, 00:39:43.802 "io_queue_requests": 512, 00:39:43.802 "delay_cmd_submit": true, 00:39:43.802 "transport_retry_count": 4, 00:39:43.802 "bdev_retry_count": 3, 00:39:43.802 "transport_ack_timeout": 0, 00:39:43.802 "ctrlr_loss_timeout_sec": 0, 00:39:43.802 "reconnect_delay_sec": 0, 00:39:43.802 "fast_io_fail_timeout_sec": 0, 00:39:43.802 "disable_auto_failback": false, 00:39:43.802 "generate_uuids": false, 00:39:43.802 "transport_tos": 0, 00:39:43.802 "nvme_error_stat": false, 00:39:43.802 "rdma_srq_size": 0, 00:39:43.802 "io_path_stat": false, 00:39:43.802 "allow_accel_sequence": false, 00:39:43.802 "rdma_max_cq_size": 0, 00:39:43.802 "rdma_cm_event_timeout_ms": 0, 00:39:43.802 "dhchap_digests": [ 00:39:43.802 "sha256", 00:39:43.802 "sha384", 00:39:43.802 "sha512" 00:39:43.802 ], 00:39:43.802 "dhchap_dhgroups": [ 00:39:43.802 "null", 00:39:43.802 "ffdhe2048", 00:39:43.802 "ffdhe3072", 00:39:43.802 "ffdhe4096", 00:39:43.802 "ffdhe6144", 00:39:43.802 "ffdhe8192" 00:39:43.802 ] 00:39:43.802 } 00:39:43.802 }, 00:39:43.802 { 00:39:43.802 "method": "bdev_nvme_attach_controller", 00:39:43.802 "params": { 00:39:43.802 "name": "nvme0", 00:39:43.802 "trtype": "TCP", 00:39:43.802 "adrfam": "IPv4", 00:39:43.802 "traddr": "127.0.0.1", 00:39:43.802 "trsvcid": "4420", 00:39:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:43.802 "prchk_reftag": false, 00:39:43.802 "prchk_guard": false, 00:39:43.802 "ctrlr_loss_timeout_sec": 0, 00:39:43.802 "reconnect_delay_sec": 0, 00:39:43.802 "fast_io_fail_timeout_sec": 0, 00:39:43.802 "psk": "key0", 00:39:43.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:43.802 "hdgst": false, 00:39:43.802 "ddgst": false 00:39:43.802 } 00:39:43.802 }, 00:39:43.802 { 00:39:43.802 "method": "bdev_nvme_set_hotplug", 00:39:43.802 "params": { 00:39:43.802 "period_us": 100000, 00:39:43.802 "enable": false 00:39:43.802 } 00:39:43.802 }, 00:39:43.802 { 00:39:43.802 "method": "bdev_wait_for_examine" 00:39:43.802 } 00:39:43.802 ] 00:39:43.802 }, 00:39:43.802 { 00:39:43.802 "subsystem": "nbd", 00:39:43.802 "config": [] 00:39:43.802 } 00:39:43.802 ] 00:39:43.802 }' 00:39:43.802 14:07:40 keyring_file -- keyring/file.sh@114 -- # killprocess 558101 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 558101 ']' 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@954 -- # kill -0 558101 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:43.802 
14:07:40 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 558101 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 558101' 00:39:43.802 killing process with pid 558101 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@969 -- # kill 558101 00:39:43.802 Received shutdown signal, test time was about 1.000000 seconds 00:39:43.802 00:39:43.802 Latency(us) 00:39:43.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:43.802 =================================================================================================================== 00:39:43.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:43.802 14:07:40 keyring_file -- common/autotest_common.sh@974 -- # wait 558101 00:39:44.061 14:07:40 keyring_file -- keyring/file.sh@117 -- # bperfpid=559554 00:39:44.061 14:07:40 keyring_file -- keyring/file.sh@119 -- # waitforlisten 559554 /var/tmp/bperf.sock 00:39:44.061 14:07:40 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 559554 ']' 00:39:44.061 14:07:40 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:44.061 14:07:40 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:44.062 14:07:40 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:44.062 14:07:40 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:44.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
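The trace above shows file.sh tearing down the first bdevperf (pid 558101) and relaunching a new one (pid 559554) with the configuration captured earlier by save_config, delivered on /dev/fd/63. A minimal bash sketch of that replay pattern, with the paths and bdevperf flags copied from the log; the process-substitution wiring and the CONFIG variable name are assumptions about how the test script connects the two steps:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Capture the running JSON config (keyring, sock, bdev subsystems, ...).
    CONFIG=$($RPC save_config)

    # Relaunch bdevperf, feeding the saved config back in over a /dev/fd path.
    "$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$CONFIG") &
    bperfpid=$!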
00:39:44.062 14:07:40 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:44.062 14:07:40 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:44.062 "subsystems": [ 00:39:44.062 { 00:39:44.062 "subsystem": "keyring", 00:39:44.062 "config": [ 00:39:44.062 { 00:39:44.062 "method": "keyring_file_add_key", 00:39:44.062 "params": { 00:39:44.062 "name": "key0", 00:39:44.062 "path": "/tmp/tmp.eHFZcc6ANh" 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "keyring_file_add_key", 00:39:44.062 "params": { 00:39:44.062 "name": "key1", 00:39:44.062 "path": "/tmp/tmp.lUVqIQ5x4d" 00:39:44.062 } 00:39:44.062 } 00:39:44.062 ] 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "subsystem": "iobuf", 00:39:44.062 "config": [ 00:39:44.062 { 00:39:44.062 "method": "iobuf_set_options", 00:39:44.062 "params": { 00:39:44.062 "small_pool_count": 8192, 00:39:44.062 "large_pool_count": 1024, 00:39:44.062 "small_bufsize": 8192, 00:39:44.062 "large_bufsize": 135168 00:39:44.062 } 00:39:44.062 } 00:39:44.062 ] 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "subsystem": "sock", 00:39:44.062 "config": [ 00:39:44.062 { 00:39:44.062 "method": "sock_set_default_impl", 00:39:44.062 "params": { 00:39:44.062 "impl_name": "posix" 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "sock_impl_set_options", 00:39:44.062 "params": { 00:39:44.062 "impl_name": "ssl", 00:39:44.062 "recv_buf_size": 4096, 00:39:44.062 "send_buf_size": 4096, 00:39:44.062 "enable_recv_pipe": true, 00:39:44.062 "enable_quickack": false, 00:39:44.062 "enable_placement_id": 0, 00:39:44.062 "enable_zerocopy_send_server": true, 00:39:44.062 "enable_zerocopy_send_client": false, 00:39:44.062 "zerocopy_threshold": 0, 00:39:44.062 "tls_version": 0, 00:39:44.062 "enable_ktls": false 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "sock_impl_set_options", 00:39:44.062 "params": { 00:39:44.062 "impl_name": "posix", 00:39:44.062 "recv_buf_size": 2097152, 00:39:44.062 "send_buf_size": 2097152, 00:39:44.062 "enable_recv_pipe": true, 00:39:44.062 "enable_quickack": false, 00:39:44.062 "enable_placement_id": 0, 00:39:44.062 "enable_zerocopy_send_server": true, 00:39:44.062 "enable_zerocopy_send_client": false, 00:39:44.062 "zerocopy_threshold": 0, 00:39:44.062 "tls_version": 0, 00:39:44.062 "enable_ktls": false 00:39:44.062 } 00:39:44.062 } 00:39:44.062 ] 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "subsystem": "vmd", 00:39:44.062 "config": [] 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "subsystem": "accel", 00:39:44.062 "config": [ 00:39:44.062 { 00:39:44.062 "method": "accel_set_options", 00:39:44.062 "params": { 00:39:44.062 "small_cache_size": 128, 00:39:44.062 "large_cache_size": 16, 00:39:44.062 "task_count": 2048, 00:39:44.062 "sequence_count": 2048, 00:39:44.062 "buf_count": 2048 00:39:44.062 } 00:39:44.062 } 00:39:44.062 ] 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "subsystem": "bdev", 00:39:44.062 "config": [ 00:39:44.062 { 00:39:44.062 "method": "bdev_set_options", 00:39:44.062 "params": { 00:39:44.062 "bdev_io_pool_size": 65535, 00:39:44.062 "bdev_io_cache_size": 256, 00:39:44.062 "bdev_auto_examine": true, 00:39:44.062 "iobuf_small_cache_size": 128, 00:39:44.062 "iobuf_large_cache_size": 16 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "bdev_raid_set_options", 00:39:44.062 "params": { 00:39:44.062 "process_window_size_kb": 1024, 00:39:44.062 "process_max_bandwidth_mb_sec": 0 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": 
"bdev_iscsi_set_options", 00:39:44.062 "params": { 00:39:44.062 "timeout_sec": 30 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "bdev_nvme_set_options", 00:39:44.062 "params": { 00:39:44.062 "action_on_timeout": "none", 00:39:44.062 "timeout_us": 0, 00:39:44.062 "timeout_admin_us": 0, 00:39:44.062 "keep_alive_timeout_ms": 10000, 00:39:44.062 "arbitration_burst": 0, 00:39:44.062 "low_priority_weight": 0, 00:39:44.062 "medium_priority_weight": 0, 00:39:44.062 "high_priority_weight": 0, 00:39:44.062 "nvme_adminq_poll_period_us": 10000, 00:39:44.062 "nvme_ioq_poll_period_us": 0, 00:39:44.062 "io_queue_requests": 512, 00:39:44.062 "delay_cmd_submit": true, 00:39:44.062 "transport_retry_count": 4, 00:39:44.062 "bdev_retry_count": 3, 00:39:44.062 "transport_ack_timeout": 0, 00:39:44.062 "ctrlr_loss_timeout_sec": 0, 00:39:44.062 "reconnect_delay_sec": 0, 00:39:44.062 "fast_io_fail_timeout_sec": 0, 00:39:44.062 "disable_auto_failback": false, 00:39:44.062 "generate_uuids": false, 00:39:44.062 "transport_tos": 0, 00:39:44.062 "nvme_error_stat": false, 00:39:44.062 "rdma_srq_size": 0, 00:39:44.062 "io_path_stat": false, 00:39:44.062 "allow_accel_sequence": false, 00:39:44.062 "rdma_max_cq_size": 0, 00:39:44.062 "rdma_cm_event_timeout_ms": 0, 00:39:44.062 "dhchap_digests": [ 00:39:44.062 "sha256", 00:39:44.062 "sha384", 00:39:44.062 "sha512" 00:39:44.062 ], 00:39:44.062 "dhchap_dhgroups": [ 00:39:44.062 "null", 00:39:44.062 "ffdhe2048", 00:39:44.062 "ffdhe3072", 00:39:44.062 "ffdhe4096", 00:39:44.062 "ffdhe6144", 00:39:44.062 "ffdhe8192" 00:39:44.062 ] 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "bdev_nvme_attach_controller", 00:39:44.062 "params": { 00:39:44.062 "name": "nvme0", 00:39:44.062 "trtype": "TCP", 00:39:44.062 "adrfam": "IPv4", 00:39:44.062 "traddr": "127.0.0.1", 00:39:44.062 "trsvcid": "4420", 00:39:44.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:44.062 "prchk_reftag": false, 00:39:44.062 "prchk_guard": false, 00:39:44.062 "ctrlr_loss_timeout_sec": 0, 00:39:44.062 "reconnect_delay_sec": 0, 00:39:44.062 "fast_io_fail_timeout_sec": 0, 00:39:44.062 "psk": "key0", 00:39:44.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:44.062 "hdgst": false, 00:39:44.062 "ddgst": false 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "bdev_nvme_set_hotplug", 00:39:44.062 "params": { 00:39:44.062 "period_us": 100000, 00:39:44.062 "enable": false 00:39:44.062 } 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "method": "bdev_wait_for_examine" 00:39:44.062 } 00:39:44.062 ] 00:39:44.062 }, 00:39:44.062 { 00:39:44.062 "subsystem": "nbd", 00:39:44.062 "config": [] 00:39:44.062 } 00:39:44.062 ] 00:39:44.062 }' 00:39:44.062 14:07:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:44.062 [2024-07-25 14:07:40.829459] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:39:44.062 [2024-07-25 14:07:40.829514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559554 ] 00:39:44.062 EAL: No free 2048 kB hugepages reported on node 1 00:39:44.062 [2024-07-25 14:07:40.863586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:39:44.062 [2024-07-25 14:07:40.900116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.062 [2024-07-25 14:07:40.937582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.322 [2024-07-25 14:07:41.092032] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:44.889 14:07:41 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:44.889 14:07:41 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:44.889 14:07:41 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:44.889 14:07:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.889 14:07:41 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:45.148 14:07:41 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:45.148 14:07:41 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.148 14:07:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:45.148 14:07:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.148 14:07:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:45.406 14:07:42 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:45.406 14:07:42 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:45.406 14:07:42 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:45.406 14:07:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:45.665 14:07:42 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:45.665 14:07:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:45.665 14:07:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.eHFZcc6ANh /tmp/tmp.lUVqIQ5x4d 00:39:45.665 14:07:42 keyring_file -- keyring/file.sh@20 -- # killprocess 559554 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 559554 ']' 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@954 -- # kill -0 559554 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 559554 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:45.665 14:07:42 keyring_file 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 559554' 00:39:45.665 killing process with pid 559554 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@969 -- # kill 559554 00:39:45.665 Received shutdown signal, test time was about 1.000000 seconds 00:39:45.665 00:39:45.665 Latency(us) 00:39:45.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.665 =================================================================================================================== 00:39:45.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:45.665 14:07:42 keyring_file -- common/autotest_common.sh@974 -- # wait 559554 00:39:45.924 14:07:42 keyring_file -- keyring/file.sh@21 -- # killprocess 558093 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 558093 ']' 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@954 -- # kill -0 558093 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 558093 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 558093' 00:39:45.924 killing process with pid 558093 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@969 -- # kill 558093 00:39:45.924 [2024-07-25 14:07:42.622465] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:39:45.924 14:07:42 keyring_file -- common/autotest_common.sh@974 -- # wait 558093 00:39:46.182 00:39:46.182 real 0m11.281s 00:39:46.182 user 0m26.147s 00:39:46.182 sys 0m3.280s 00:39:46.182 14:07:42 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:46.182 14:07:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:46.182 ************************************ 00:39:46.182 END TEST keyring_file 00:39:46.182 ************************************ 00:39:46.182 14:07:42 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:39:46.182 14:07:42 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:46.182 14:07:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:46.182 14:07:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:46.182 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:39:46.182 ************************************ 00:39:46.182 START TEST keyring_linux 00:39:46.182 ************************************ 00:39:46.182 14:07:43 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:46.442 * Looking for test storage... 
00:39:46.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:46.442 14:07:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:46.442 14:07:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:46.442 14:07:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:46.443 14:07:43 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:46.443 14:07:43 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:46.443 14:07:43 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:46.443 14:07:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.443 14:07:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.443 14:07:43 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.443 14:07:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:46.443 14:07:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:46.443 14:07:43 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:46.443 /tmp/:spdk-test:key0 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:46.443 14:07:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:46.443 14:07:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:46.443 /tmp/:spdk-test:key1 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=560120 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 560120 00:39:46.443 14:07:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:46.443 14:07:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 560120 ']' 00:39:46.443 14:07:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:46.443 14:07:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:46.443 14:07:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:46.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:46.443 14:07:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:46.443 14:07:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:46.443 [2024-07-25 14:07:43.286582] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:39:46.443 [2024-07-25 14:07:43.286637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560120 ] 00:39:46.443 EAL: No free 2048 kB hugepages reported on node 1 00:39:46.443 [2024-07-25 14:07:43.321248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
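Both suites build their PSKs with format_interchange_psk, whose inline "python -" step is visible in the trace. The output matches the NVMe/TCP PSK interchange encoding: a NVMeTLSkey-1 prefix, a two-digit hash identifier (00 meaning no hash here), then base64 of the key bytes with a CRC32 appended. A sketch of what that python step plausibly computes; the little-endian CRC packing is an assumption, chosen because it should reproduce the NVMeTLSkey-1:00:MDAx...JEiQ: string the log prints if the CRC convention matches:

    format_interchange_psk() {    # sketch of nvmf/common.sh's "python -" step
        python3 -c 'import base64, sys, zlib
    k = sys.argv[1].encode()
    # Interchange form: base64(key bytes || CRC32 of key, little-endian assumed)
    blob = k + zlib.crc32(k).to_bytes(4, "little")
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(blob).decode()))' "$1" "$2"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0
    # expected (per the trace): NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: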
00:39:46.703 [2024-07-25 14:07:43.357042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.703 [2024-07-25 14:07:43.396712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:47.272 14:07:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:47.272 [2024-07-25 14:07:44.079620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.272 null0 00:39:47.272 [2024-07-25 14:07:44.111682] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:47.272 [2024-07-25 14:07:44.112036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.272 14:07:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:47.272 91132419 00:39:47.272 14:07:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:47.272 561947560 00:39:47.272 14:07:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=560181 00:39:47.272 14:07:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 560181 /var/tmp/bperf.sock 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 560181 ']' 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:47.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:47.272 14:07:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:47.272 14:07:44 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:47.531 [2024-07-25 14:07:44.184611] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:39:47.531 [2024-07-25 14:07:44.184660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560181 ] 00:39:47.531 EAL: No free 2048 kB hugepages reported on node 1 00:39:47.531 [2024-07-25 14:07:44.219493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
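Unlike keyring_file, which stages PSKs in temp files, linux.sh parks them in the kernel session keyring and lets the bdev layer resolve ":spdk-test:key0" by name. The keyctl round trip exercised above and torn down in the cleanup further on, condensed into a sketch; the key material and serial are the ones the trace reports:

    # Add the interchange-format PSK to the session keyring (@s); keyctl prints the serial.
    keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    # -> 91132419

    # Look the key back up by name, inspect it, then drop the link (the cleanup path).
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"
    keyctl unlink "$sn"    # "1 links removed"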
00:39:47.531 [2024-07-25 14:07:44.253594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.531 [2024-07-25 14:07:44.291239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.099 14:07:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:48.099 14:07:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:48.099 14:07:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:48.099 14:07:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:48.358 14:07:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:48.358 14:07:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:48.618 14:07:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:48.618 14:07:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:48.918 [2024-07-25 14:07:45.518797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:48.918 nvme0n1 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:48.918 14:07:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:48.918 14:07:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:48.918 14:07:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.918 14:07:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:48.918 14:07:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@25 -- # sn=91132419 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@26 -- # [[ 91132419 == \9\1\1\3\2\4\1\9 ]] 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 91132419 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:49.178 14:07:45 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:49.178 Running I/O for 1 seconds... 00:39:50.557 00:39:50.557 Latency(us) 00:39:50.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.557 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:50.557 nvme0n1 : 1.01 12655.62 49.44 0.00 0.00 10073.18 5531.24 16462.64 00:39:50.557 =================================================================================================================== 00:39:50.557 Total : 12655.62 49.44 0.00 0.00 10073.18 5531.24 16462.64 00:39:50.557 0 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:50.557 14:07:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:50.557 14:07:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:50.557 14:07:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:50.557 14:07:47 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:50.557 14:07:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:50.816 [2024-07-25 14:07:47.574396] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:50.816 [2024-07-25 14:07:47.575148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9cc50 (107): Transport endpoint is not connected 00:39:50.816 [2024-07-25 14:07:47.576142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9cc50 (9): Bad file descriptor 00:39:50.816 [2024-07-25 14:07:47.577143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:50.816 [2024-07-25 14:07:47.577155] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:50.816 [2024-07-25 14:07:47.577165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:50.816 request: 00:39:50.816 { 00:39:50.816 "name": "nvme0", 00:39:50.816 "trtype": "tcp", 00:39:50.816 "traddr": "127.0.0.1", 00:39:50.816 "adrfam": "ipv4", 00:39:50.816 "trsvcid": "4420", 00:39:50.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:50.816 "prchk_reftag": false, 00:39:50.816 "prchk_guard": false, 00:39:50.816 "hdgst": false, 00:39:50.816 "ddgst": false, 00:39:50.816 "psk": ":spdk-test:key1", 00:39:50.816 "method": "bdev_nvme_attach_controller", 00:39:50.816 "req_id": 1 00:39:50.816 } 00:39:50.816 Got JSON-RPC error response 00:39:50.816 response: 00:39:50.816 { 00:39:50.816 "code": -5, 00:39:50.816 "message": "Input/output error" 00:39:50.816 } 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@33 -- # sn=91132419 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 91132419 00:39:50.816 1 links removed 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@33 -- # sn=561947560 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 561947560 00:39:50.816 1 links removed 00:39:50.816 14:07:47 keyring_linux -- keyring/linux.sh@41 -- # killprocess 560181 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 560181 ']' 00:39:50.816 14:07:47 keyring_linux -- 
common/autotest_common.sh@954 -- # kill -0 560181 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 560181 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 560181' 00:39:50.816 killing process with pid 560181 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@969 -- # kill 560181 00:39:50.816 Received shutdown signal, test time was about 1.000000 seconds 00:39:50.816 00:39:50.816 Latency(us) 00:39:50.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.816 =================================================================================================================== 00:39:50.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:50.816 14:07:47 keyring_linux -- common/autotest_common.sh@974 -- # wait 560181 00:39:51.076 14:07:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 560120 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 560120 ']' 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 560120 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 560120 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 560120' 00:39:51.076 killing process with pid 560120 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@969 -- # kill 560120 00:39:51.076 14:07:47 keyring_linux -- common/autotest_common.sh@974 -- # wait 560120 00:39:51.336 00:39:51.336 real 0m5.185s 00:39:51.336 user 0m8.849s 00:39:51.336 sys 0m1.646s 00:39:51.336 14:07:48 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.336 14:07:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:51.336 ************************************ 00:39:51.336 END TEST keyring_linux 00:39:51.336 ************************************ 00:39:51.595 14:07:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:39:51.595 14:07:48 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:51.595 14:07:48 -- 
spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:51.595 14:07:48 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:39:51.595 14:07:48 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:39:51.595 14:07:48 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:39:51.595 14:07:48 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:39:51.595 14:07:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:51.595 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:39:51.595 14:07:48 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:39:51.595 14:07:48 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:51.595 14:07:48 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:51.595 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:39:58.162 INFO: APP EXITING 00:39:58.162 INFO: killing all VMs 00:39:58.162 INFO: killing vhost app 00:39:58.162 INFO: EXIT DONE 00:40:00.696 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:40:00.696 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:40:00.956 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:40:04.250 Cleaning 00:40:04.250 Removing: /var/run/dpdk/spdk0/config 00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:04.251 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:04.251 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:04.251 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:04.251 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:04.251 Removing: /var/run/dpdk/spdk1/config 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:04.251 Removing: /var/run/dpdk/spdk1/hugepage_info 
00:40:04.250 Cleaning
00:40:04.250 Removing: /var/run/dpdk/spdk0/config
00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:40:04.250 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:40:04.251 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:40:04.251 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:40:04.251 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:40:04.251 Removing: /var/run/dpdk/spdk0/hugepage_info
00:40:04.251 Removing: /var/run/dpdk/spdk1/config
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:04.251 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:40:04.251 Removing: /var/run/dpdk/spdk1/hugepage_info
00:40:04.251 Removing: /var/run/dpdk/spdk1/mp_socket
00:40:04.251 Removing: /var/run/dpdk/spdk2/config
00:40:04.251 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:04.251 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:04.251 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:04.251 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:04.251 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:04.251 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:04.509 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:04.509 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:04.509 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:40:04.509 Removing: /var/run/dpdk/spdk2/hugepage_info
00:40:04.510 Removing: /var/run/dpdk/spdk3/config
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:04.510 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:40:04.510 Removing: /var/run/dpdk/spdk3/hugepage_info
00:40:04.510 Removing: /var/run/dpdk/spdk4/config
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:04.510 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:40:04.510 Removing: /var/run/dpdk/spdk4/hugepage_info
00:40:04.510 Removing: /dev/shm/bdev_svc_trace.1
00:40:04.510 Removing: /dev/shm/nvmf_trace.0
00:40:04.510 Removing: /dev/shm/spdk_tgt_trace.pid74907
00:40:04.510 Removing: /var/run/dpdk/spdk0
00:40:04.510 Removing: /var/run/dpdk/spdk1
00:40:04.510 Removing: /var/run/dpdk/spdk2
00:40:04.510 Removing: /var/run/dpdk/spdk3
00:40:04.510 Removing: /var/run/dpdk/spdk4
00:40:04.510 Removing: /var/run/dpdk/spdk_pid110391
00:40:04.510 Removing: /var/run/dpdk/spdk_pid111067
00:40:04.510 Removing: /var/run/dpdk/spdk_pid115602
00:40:04.510 Removing: /var/run/dpdk/spdk_pid115901
00:40:04.510 Removing: /var/run/dpdk/spdk_pid120555
00:40:04.510 Removing: /var/run/dpdk/spdk_pid127079
00:40:04.510 Removing: /var/run/dpdk/spdk_pid129784
00:40:04.510 Removing: /var/run/dpdk/spdk_pid140525
00:40:04.510 Removing: /var/run/dpdk/spdk_pid150060
00:40:04.510 Removing: /var/run/dpdk/spdk_pid151810
00:40:04.510 Removing: /var/run/dpdk/spdk_pid152854
00:40:04.510 Removing: /var/run/dpdk/spdk_pid171319
00:40:04.510 Removing: /var/run/dpdk/spdk_pid175345
00:40:04.510 Removing: /var/run/dpdk/spdk_pid259256
00:40:04.510 Removing: /var/run/dpdk/spdk_pid264832
00:40:04.510 Removing: /var/run/dpdk/spdk_pid270799
00:40:04.510 Removing: /var/run/dpdk/spdk_pid277036
00:40:04.510 Removing: /var/run/dpdk/spdk_pid277041
00:40:04.510 Removing: /var/run/dpdk/spdk_pid277858
00:40:04.510 Removing: /var/run/dpdk/spdk_pid278872
00:40:04.510 Removing: /var/run/dpdk/spdk_pid279669
00:40:04.510 Removing: /var/run/dpdk/spdk_pid280205
00:40:04.510 Removing: /var/run/dpdk/spdk_pid280238
00:40:04.510 Removing: /var/run/dpdk/spdk_pid280474
00:40:04.510 Removing: /var/run/dpdk/spdk_pid280730
00:40:04.510 Removing: /var/run/dpdk/spdk_pid280733
00:40:04.510 Removing: /var/run/dpdk/spdk_pid281530
00:40:04.769 Removing: /var/run/dpdk/spdk_pid282461
00:40:04.769 Removing: /var/run/dpdk/spdk_pid283369
00:40:04.769 Removing: /var/run/dpdk/spdk_pid283899
00:40:04.769 Removing: /var/run/dpdk/spdk_pid283901
00:40:04.769 Removing: /var/run/dpdk/spdk_pid284171
00:40:04.769 Removing: /var/run/dpdk/spdk_pid285463
00:40:04.769 Removing: /var/run/dpdk/spdk_pid286422
00:40:04.769 Removing: /var/run/dpdk/spdk_pid295328
00:40:04.769 Removing: /var/run/dpdk/spdk_pid320363
00:40:04.769 Removing: /var/run/dpdk/spdk_pid325132
00:40:04.769 Removing: /var/run/dpdk/spdk_pid326732
00:40:04.769 Removing: /var/run/dpdk/spdk_pid329127
00:40:04.769 Removing: /var/run/dpdk/spdk_pid329296
00:40:04.769 Removing: /var/run/dpdk/spdk_pid329407
00:40:04.769 Removing: /var/run/dpdk/spdk_pid329438
00:40:04.769 Removing: /var/run/dpdk/spdk_pid330001
00:40:04.769 Removing: /var/run/dpdk/spdk_pid331836
00:40:04.769 Removing: /var/run/dpdk/spdk_pid332699
00:40:04.769 Removing: /var/run/dpdk/spdk_pid333073
00:40:04.769 Removing: /var/run/dpdk/spdk_pid335406
00:40:04.769 Removing: /var/run/dpdk/spdk_pid335804
00:40:04.769 Removing: /var/run/dpdk/spdk_pid336526
00:40:04.769 Removing: /var/run/dpdk/spdk_pid340794
00:40:04.769 Removing: /var/run/dpdk/spdk_pid346433
00:40:04.769 Removing: /var/run/dpdk/spdk_pid351552
00:40:04.769 Removing: /var/run/dpdk/spdk_pid389175
00:40:04.769 Removing: /var/run/dpdk/spdk_pid393269
00:40:04.769 Removing: /var/run/dpdk/spdk_pid399346
00:40:04.769 Removing: /var/run/dpdk/spdk_pid400714
00:40:04.769 Removing: /var/run/dpdk/spdk_pid402199
00:40:04.769 Removing: /var/run/dpdk/spdk_pid407222
00:40:04.769 Removing: /var/run/dpdk/spdk_pid411576
00:40:04.769 Removing: /var/run/dpdk/spdk_pid419296
00:40:04.769 Removing: /var/run/dpdk/spdk_pid419374
00:40:04.769 Removing: /var/run/dpdk/spdk_pid424134
00:40:04.769 Removing: /var/run/dpdk/spdk_pid424327
00:40:04.769 Removing: /var/run/dpdk/spdk_pid424595
00:40:04.769 Removing: /var/run/dpdk/spdk_pid424943
00:40:04.769 Removing: /var/run/dpdk/spdk_pid425038
00:40:04.769 Removing: /var/run/dpdk/spdk_pid426729
00:40:04.769 Removing: /var/run/dpdk/spdk_pid428552
00:40:04.769 Removing: /var/run/dpdk/spdk_pid430129
00:40:04.769 Removing: /var/run/dpdk/spdk_pid431723
00:40:04.769 Removing: /var/run/dpdk/spdk_pid433527
00:40:04.769 Removing: /var/run/dpdk/spdk_pid435104
00:40:04.769 Removing: /var/run/dpdk/spdk_pid441418
00:40:04.769 Removing: /var/run/dpdk/spdk_pid441867
00:40:04.769 Removing: /var/run/dpdk/spdk_pid444023
00:40:04.769 Removing: /var/run/dpdk/spdk_pid445032
00:40:04.769 Removing: /var/run/dpdk/spdk_pid452284
00:40:04.769 Removing: /var/run/dpdk/spdk_pid454920
00:40:04.769 Removing: /var/run/dpdk/spdk_pid460477
00:40:04.769 Removing: /var/run/dpdk/spdk_pid466084
00:40:04.769 Removing: /var/run/dpdk/spdk_pid474584
00:40:04.769 Removing: /var/run/dpdk/spdk_pid481805
00:40:04.769 Removing: /var/run/dpdk/spdk_pid481807
00:40:04.769 Removing: /var/run/dpdk/spdk_pid500917
00:40:04.769 Removing: /var/run/dpdk/spdk_pid501455
00:40:04.769 Removing: /var/run/dpdk/spdk_pid501989
00:40:04.769 Removing: /var/run/dpdk/spdk_pid502530
00:40:04.769 Removing: /var/run/dpdk/spdk_pid503369
00:40:05.029 Removing: /var/run/dpdk/spdk_pid503906
00:40:05.029 Removing: /var/run/dpdk/spdk_pid504455
00:40:05.029 Removing: /var/run/dpdk/spdk_pid504989
00:40:05.029 Removing: /var/run/dpdk/spdk_pid509359
00:40:05.029 Removing: /var/run/dpdk/spdk_pid509567
00:40:05.029 Removing: /var/run/dpdk/spdk_pid515738
00:40:05.029 Removing: /var/run/dpdk/spdk_pid515889
00:40:05.029 Removing: /var/run/dpdk/spdk_pid518161
00:40:05.029 Removing: /var/run/dpdk/spdk_pid526263
00:40:05.029 Removing: /var/run/dpdk/spdk_pid526318
00:40:05.029 Removing: /var/run/dpdk/spdk_pid531652
00:40:05.029 Removing: /var/run/dpdk/spdk_pid533593
00:40:05.029 Removing: /var/run/dpdk/spdk_pid535571
00:40:05.029 Removing: /var/run/dpdk/spdk_pid536816
00:40:05.029 Removing: /var/run/dpdk/spdk_pid539221
00:40:05.029 Removing: /var/run/dpdk/spdk_pid540359
00:40:05.029 Removing: /var/run/dpdk/spdk_pid549697
00:40:05.029 Removing: /var/run/dpdk/spdk_pid550221
00:40:05.029 Removing: /var/run/dpdk/spdk_pid550727
00:40:05.029 Removing: /var/run/dpdk/spdk_pid553103
00:40:05.029 Removing: /var/run/dpdk/spdk_pid553544
00:40:05.029 Removing: /var/run/dpdk/spdk_pid554006
00:40:05.029 Removing: /var/run/dpdk/spdk_pid558093
00:40:05.029 Removing: /var/run/dpdk/spdk_pid558101
00:40:05.029 Removing: /var/run/dpdk/spdk_pid559554
00:40:05.029 Removing: /var/run/dpdk/spdk_pid560120
00:40:05.029 Removing: /var/run/dpdk/spdk_pid560181
00:40:05.029 Removing: /var/run/dpdk/spdk_pid72446
00:40:05.029 Removing: /var/run/dpdk/spdk_pid73697
00:40:05.029 Removing: /var/run/dpdk/spdk_pid74907
00:40:05.029 Removing: /var/run/dpdk/spdk_pid75601
00:40:05.029 Removing: /var/run/dpdk/spdk_pid76557
00:40:05.029 Removing: /var/run/dpdk/spdk_pid76882
00:40:05.029 Removing: /var/run/dpdk/spdk_pid78347
00:40:05.029 Removing: /var/run/dpdk/spdk_pid78365
00:40:05.029 Removing: /var/run/dpdk/spdk_pid78733
00:40:05.029 Removing: /var/run/dpdk/spdk_pid80420
00:40:05.029 Removing: /var/run/dpdk/spdk_pid81632
00:40:05.029 Removing: /var/run/dpdk/spdk_pid81956
00:40:05.029 Removing: /var/run/dpdk/spdk_pid82271
00:40:05.029 Removing: /var/run/dpdk/spdk_pid82613
00:40:05.029 Removing: /var/run/dpdk/spdk_pid82929
00:40:05.029 Removing: /var/run/dpdk/spdk_pid83223
00:40:05.029 Removing: /var/run/dpdk/spdk_pid83501
00:40:05.029 Removing: /var/run/dpdk/spdk_pid83750
00:40:05.029 Removing: /var/run/dpdk/spdk_pid84444
00:40:05.029 Removing: /var/run/dpdk/spdk_pid87521
00:40:05.029 Removing: /var/run/dpdk/spdk_pid87704
00:40:05.029 Removing: /var/run/dpdk/spdk_pid87871
00:40:05.029 Removing: /var/run/dpdk/spdk_pid88118
00:40:05.029 Removing: /var/run/dpdk/spdk_pid88679
00:40:05.029 Removing: /var/run/dpdk/spdk_pid88813
00:40:05.029 Removing: /var/run/dpdk/spdk_pid89424
00:40:05.029 Removing: /var/run/dpdk/spdk_pid89520
00:40:05.029 Removing: /var/run/dpdk/spdk_pid89858
00:40:05.029 Removing: /var/run/dpdk/spdk_pid90083
00:40:05.029 Removing: /var/run/dpdk/spdk_pid90361
00:40:05.029 Removing: /var/run/dpdk/spdk_pid90394
00:40:05.029 Removing: /var/run/dpdk/spdk_pid91008
00:40:05.029 Removing: /var/run/dpdk/spdk_pid91174
00:40:05.029 Removing: /var/run/dpdk/spdk_pid91448
00:40:05.029 Removing: /var/run/dpdk/spdk_pid95449
00:40:05.029 Removing: /var/run/dpdk/spdk_pid99966
00:40:05.288 Clean
00:40:05.288 14:08:02 -- common/autotest_common.sh@1451 -- # return 0
00:40:05.288 14:08:02 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:40:05.288 14:08:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:40:05.288 14:08:02 -- common/autotest_common.sh@10 -- # set +x
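The Removing: entries above are autotest_cleanup dropping per-target DPDK runtime state from /var/run/dpdk (config, fbarray segment metadata, hugepage bookkeeping, the multiprocess socket) plus stale trace files in /dev/shm. A minimal sketch of an equivalent sweep, assuming the same layout; the function name is illustrative:

    cleanup_dpdk_runtime() {
      local f
      for f in /var/run/dpdk/spdk*/*; do        # config, fbarray_*, hugepage_info, mp_socket
        echo "Removing: $f"
        rm -f "$f"
      done
      rmdir /var/run/dpdk/spdk* 2>/dev/null     # then the emptied per-target directories
      rm -f /dev/shm/bdev_svc_trace.* /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*
    }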
00:40:05.288 14:08:02 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:40:05.289 14:08:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:40:05.289 14:08:02 -- common/autotest_common.sh@10 -- # set +x
00:40:05.289 14:08:02 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:05.289 14:08:02 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:05.289 14:08:02 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:05.289 14:08:02 -- spdk/autotest.sh@395 -- # hash lcov
00:40:05.289 14:08:02 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:40:05.289 14:08:02 -- spdk/autotest.sh@397 -- # hostname
00:40:05.289 14:08:02 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:05.548 geninfo: WARNING: invalid characters removed from testname!
00:40:27.490 14:08:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:28.501 14:08:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:30.407 14:08:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:31.786 14:08:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:33.691 14:08:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:35.067 14:08:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:36.969 14:08:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
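The lcov sequence above follows a common three-step coverage workflow: capture the post-test counters, merge them with the pre-test baseline, then strip paths that should not count against coverage. A shortened sketch with the same flags (the --rc switches from the log are elided here and the paths are abbreviated):

    OUT=../output
    lcov -q -c -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"       # capture this run
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"                                       # merge baseline + test
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"  # drop external code
    done
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"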
00:40:36.969 14:08:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:40:36.969 14:08:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:40:36.969 14:08:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:36.969 14:08:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:36.969 14:08:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:36.969 14:08:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:36.970 14:08:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:36.970 14:08:33 -- paths/export.sh@5 -- $ export PATH
00:40:36.970 14:08:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:36.970 14:08:33 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:40:36.970 14:08:33 -- common/autobuild_common.sh@447 -- $ date +%s
00:40:36.970 14:08:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721909313.XXXXXX
00:40:36.970 14:08:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721909313.G598rF
00:40:36.970 14:08:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:40:36.970 14:08:33 -- common/autobuild_common.sh@453 -- $ '[' -n main ']'
00:40:36.970 14:08:33 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:40:36.970 14:08:33 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:40:36.970 14:08:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
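Note that export.sh prepends the toolchain directories unconditionally, which is why the traced PATH ends up with each of them twice. A guard like the following keeps the prepend idempotent (a sketch, not part of the traced script):

    path_prepend() {
      case ":$PATH:" in
        *":$1:"*) ;;              # already present, leave PATH unchanged
        *) PATH="$1:$PATH" ;;
      esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH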
00:40:36.970 14:08:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:40:36.970 14:08:33 -- common/autobuild_common.sh@463 -- $ get_config_params
00:40:36.970 14:08:33 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:40:36.970 14:08:33 -- common/autotest_common.sh@10 -- $ set +x
00:40:36.970 14:08:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:40:36.970 14:08:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:40:36.970 14:08:33 -- pm/common@17 -- $ local monitor
00:40:36.970 14:08:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:36.970 14:08:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:36.970 14:08:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:36.970 14:08:33 -- pm/common@21 -- $ date +%s
00:40:36.970 14:08:33 -- pm/common@21 -- $ date +%s
00:40:36.970 14:08:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:36.970 14:08:33 -- pm/common@25 -- $ sleep 1
00:40:36.970 14:08:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721909313
00:40:36.970 14:08:33 -- pm/common@21 -- $ date +%s
00:40:36.970 14:08:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721909313
00:40:36.970 14:08:33 -- pm/common@21 -- $ date +%s
00:40:36.970 14:08:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721909313
00:40:36.970 14:08:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721909313
00:40:36.970 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721909313_collect-vmstat.pm.log
00:40:36.970 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721909313_collect-cpu-temp.pm.log
00:40:36.970 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721909313_collect-cpu-load.pm.log
00:40:36.970 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721909313_collect-bmc-pm.bmc.pm.log
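start_monitor_resources launches each collector in the background with a shared log prefix, and the pidfiles they leave behind are what the EXIT trap later uses to TERM the monitors. A compact sketch of that lifecycle, assuming each collector writes its own pidfile under the power directory:

    POWER_DIR=../output/power
    start_monitor_resources() {
      local m stamp; stamp=$(date +%s)
      for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "./scripts/perf/pm/$m" -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$stamp" &
        echo $! > "$POWER_DIR/$m.pid"           # remember who to stop later
      done
    }
    stop_monitor_resources() {
      local pidfile
      for pidfile in "$POWER_DIR"/*.pid; do
        [ -e "$pidfile" ] || continue
        kill -TERM "$(<"$pidfile")" 2>/dev/null
        rm -f "$pidfile"
      done
    }
    trap stop_monitor_resources EXIT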
00:40:37.908 14:08:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:40:37.908 14:08:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:40:37.908 14:08:34 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:37.908 14:08:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:40:37.908 14:08:34 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:40:37.908 14:08:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:40:37.908 14:08:34 -- spdk/autopackage.sh@19 -- $ timing_finish
00:40:37.908 14:08:34 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:37.908 14:08:34 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:40:37.908 14:08:34 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:37.908 14:08:34 -- spdk/autopackage.sh@20 -- $ exit 0
00:40:37.908 14:08:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:40:37.908 14:08:34 -- pm/common@29 -- $ signal_monitor_resources TERM
00:40:37.908 14:08:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:40:37.908 14:08:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:37.908 14:08:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:40:37.908 14:08:34 -- pm/common@44 -- $ pid=572020
00:40:37.908 14:08:34 -- pm/common@50 -- $ kill -TERM 572020
00:40:37.908 14:08:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:37.908 14:08:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:40:37.908 14:08:34 -- pm/common@44 -- $ pid=572022
00:40:37.908 14:08:34 -- pm/common@50 -- $ kill -TERM 572022
00:40:37.908 14:08:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:37.908 14:08:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:40:37.908 14:08:34 -- pm/common@44 -- $ pid=572024
00:40:37.908 14:08:34 -- pm/common@50 -- $ kill -TERM 572024
00:40:37.908 14:08:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:37.908 14:08:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:40:37.908 14:08:34 -- pm/common@44 -- $ pid=572049
00:40:37.908 14:08:34 -- pm/common@50 -- $ sudo -E kill -TERM 572049
00:40:38.167 + [[ -n 4142467 ]]
00:40:38.167 + sudo kill 4142467
00:40:38.177 [Pipeline] }
00:40:38.196 [Pipeline] // stage
00:40:38.202 [Pipeline] }
00:40:38.219 [Pipeline] // timeout
00:40:38.225 [Pipeline] }
00:40:38.243 [Pipeline] // catchError
00:40:38.249 [Pipeline] }
00:40:38.267 [Pipeline] // wrap
00:40:38.274 [Pipeline] }
00:40:38.289 [Pipeline] // catchError
00:40:38.297 [Pipeline] stage
00:40:38.299 [Pipeline] { (Epilogue)
00:40:38.313 [Pipeline] catchError
00:40:38.314 [Pipeline] {
00:40:38.328 [Pipeline] echo
00:40:38.330 Cleanup processes
00:40:38.335 [Pipeline] sh
00:40:38.619 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:38.619 572148 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:40:38.619 572469 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:38.632 [Pipeline] sh
00:40:38.912 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:38.912 ++ grep -v 'sudo pgrep'
00:40:38.912 ++ awk '{print $1}'
00:40:38.912 + sudo kill -9 572148
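The epilogue sweep above lists every process still referencing the workspace, filters out the pgrep pipeline itself, and force-kills the remainder (here the leftover ipmitool dump). The same idiom as a standalone sketch:

    leftovers=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
                | grep -v 'sudo pgrep' \
                | awk '{print $1}')
    [ -n "$leftovers" ] && sudo kill -9 $leftovers  # unquoted on purpose: one PID per word
    true                                            # never fail the stage over cleanup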
00:40:38.922 [Pipeline] sh
00:40:39.205 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:39.205 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:40:44.478 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:40:48.683 [Pipeline] sh
00:40:48.964 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:48.964 Artifacts sizes are good
00:40:48.978 [Pipeline] archiveArtifacts
00:40:48.985 Archiving artifacts
00:40:49.187 [Pipeline] sh
00:40:49.470 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:49.485 [Pipeline] cleanWs
00:40:49.495 [WS-CLEANUP] Deleting project workspace...
00:40:49.495 [WS-CLEANUP] Deferred wipeout is used...
00:40:49.503 [WS-CLEANUP] done
00:40:49.504 [Pipeline] }
00:40:49.521 [Pipeline] // catchError
00:40:49.532 [Pipeline] sh
00:40:49.815 + logger -p user.info -t JENKINS-CI
00:40:49.824 [Pipeline] }
00:40:49.836 [Pipeline] // stage
00:40:49.840 [Pipeline] }
00:40:49.854 [Pipeline] // node
00:40:49.859 [Pipeline] End of Pipeline
00:40:49.888 Finished: SUCCESS